Evaluating, Interpreting and Monitoring Machine Learning Models

Abstract: 

Machine learning (ML) models have revolutionized several fields, including search and recommendation, finance, healthcare, and the fundamental sciences. Unfortunately, much of this progress has come at the cost of models becoming more complex and opaque. Despite widespread deployment, the practice of evaluating models remains limited to computing aggregate metrics on held-out test sets. In this talk, I will argue that this practice can fall short of surfacing failure modes that may otherwise show up during real-world usage.

In light of this, I will discuss the importance of understanding model predictions by asking: why did the model make this prediction? One approach to answering this question is to attribute predictions to input features, a problem that has received a lot of attention in the last few years. I will describe an attribution method called Integrated Gradients (ICML 2017) that is applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification. I will discuss an evaluation workflow based on feature attributions and describe several applications of it. I will then show how attributions can be used for monitoring models in production, and conclude with some caveats around using feature attributions.
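For readers unfamiliar with the method: Integrated Gradients attributes a prediction F(x) by accumulating gradients of F along the straight-line path from a baseline input x' to the actual input x. Below is a minimal NumPy sketch of the standard Riemann-sum approximation, not the authors' released code; the gradient function grad_fn, the baseline choice, and the step count are placeholders the caller must supply.

    import numpy as np

    def integrated_gradients(x, baseline, grad_fn, steps=50):
        # Integrated Gradients: (x - x') * integral over alpha in [0, 1]
        # of dF/dx evaluated at x' + alpha * (x - x'),
        # approximated here by a Riemann sum with `steps` terms.
        alphas = np.linspace(0.0, 1.0, steps + 1)[1:]  # skip alpha = 0
        grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
        return (x - baseline) * grads.mean(axis=0)

As a sanity check, for the toy model F(x) = sum_i x_i^2 with a zero baseline (so grad_fn is lambda z: 2 * z), the attributions converge to x_i^2 as steps grows, and they sum to F(x) - F(baseline); this is the completeness property that underpins the method's axiomatic justification.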

This talk is based on joint work with colleagues at Google.

Bio: 

Ankur Taly is a Staff Research Scientist at Google, where he carries out research in machine learning and explainable AI. Previously, he served as the Head of Data Science at Fiddler Labs, where he was responsible for developing, productionizing, and evangelizing core explainable AI technology. Ankur is best known for his contribution to developing and applying Integrated Gradients, an interpretability algorithm for deep networks. His research in this area has resulted in publications at top-tier machine learning conferences and in prestigious journals, including the journal of the American Academy of Ophthalmology (AAO) and the Proceedings of the National Academy of Sciences (PNAS). Besides explainable AI, Ankur has a broad research background and has published 30+ papers in areas including computer security, programming languages, formal verification, and machine learning. He has served on several academic conference program committees and taught short courses at summer schools and conferences. Ankur earned his PhD in Computer Science from Stanford University in 2012 and a BTech in Computer Science from IIT Bombay in 2007.
