Presented at Data Science Sydney, March 2018
Abstract: Interpretability matters! Machine learning models have become more accurate, but also more complex, over the past few years. Yet in many settings data scientists remain tethered to linear models or decision trees because they are (relatively!) easy to explain. Moreover, developments in data ethics and governance are increasing the pressure on data scientists to explain their models and to ensure that discrimination and other unwanted outcomes are avoided. In this talk, Anthony will outline the latest and greatest in machine learning interpretability and explain why it is a crucial part of any data scientist's toolkit.