Interpretable AI: Can machine learning explain itself?


AI is revolutionizing many industries: high-performance machine learning models use large swaths of data to generate highly accurate predictions. But how much do we understand about those predictions? Can we identify and quantify the drivers of performance? How much can we trust the outputs of a model?

Model interpretability attempts to address all these questions and many more. It is drawing the attention of researchers, practitioners, industry experts, and government regulators alike. The topic is an active area of research in the academic machine learning community and a critical issue for the practical deployment of AI solutions in industry.

In this talk, we explore the fundamental trade-offs that exist between model performance and interpretability when delivering advanced analytics solutions in practice. We explain how to think about model interpretability at multiple stages of building a practical machine learning solution and the choices that need to be made to strike the right balance between performance and interpretability.

We discuss several technical methodologies that enhance interpretability in machine learning, from tree interpreters for random forest models to local interpretable model-agnostic explanations (LIME), which can be applied to a wide variety of models, including traditionally less interpretable models such as deep neural networks.
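To make the LIME idea concrete, here is a minimal from-scratch sketch of its core recipe (not the `lime` library itself): perturb the input around the instance being explained, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The `black_box` model below is a hypothetical stand-in for any opaque predictor.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def lime_explain(f, x0, n_samples=5000, kernel_width=0.75, seed=0):
    """LIME-style local explanation: perturb x0, weight samples by
    proximity, fit a weighted linear surrogate, return its slopes."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))
    y = f(X)
    # Exponential proximity kernel: nearby perturbations weigh more.
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X - x0])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[1:]  # local feature importances (slopes)

x0 = np.array([2.0, 1.0])
importances = lime_explain(black_box, x0)
# Near x0 the true local gradient is (2 * x0[0], 3) = (4, 3),
# so the surrogate's slopes should land close to those values.
print(importances)
```

The key design choice is locality: the surrogate is only trusted near `x0`, which is what lets a simple linear model explain a globally nonlinear one.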

Finally, we describe practical applications and use cases where model interpretability is a crucial component of advanced analytics solutions. We present a use case in the pharma space, where Real World Evidence can transform patient outcomes as well as impact R&D, trial planning, and commercial strategy.


Jordi Diaz is a Principal Data Scientist at QuantumBlack (a McKinsey company). At QuantumBlack, Jordi helps companies use data to improve their performance and outlearn their rivals. He develops machine learning and AI solutions that solve challenging business problems and transform industries, from aerospace to automotive, from healthcare to sports. He is also the organizer of Boston Bayesians, a meetup group for those interested in Bayesian methods for statistics and machine learning. Jordi has more than ten years of experience in data analysis, machine learning, and software engineering. Prior to QuantumBlack, he held data science and engineering positions at Pixability and Qualcomm. Jordi received his PhD in Electrical Engineering from the New Jersey Institute of Technology, where he researched statistical signal processing and information theory for wireless communications. He holds a Telecommunication Engineering degree from UPC, Barcelona, Spain, and was a fellow of the Advanced Study Program at MIT.

Open Data Science
One Broadway
Cambridge, MA 02142
