Introduction to Interpretability in Machine Learning

Abstract: 

Machine learning projects are rarely like a Kaggle competition. It is thrilling to see your name jump up on the leaderboard, which makes competitions exciting and, dare I say, addictive. However, predictive power matters much less in real life than Kaggle competitions would have you believe. Often it is just as important to understand why a model makes a certain prediction. The ‘why’ plays an important role during the model development phase as well as after deployment. Complex machine learning pipelines are difficult to debug, and issues can go unnoticed. One way to increase trust in the model during development is to improve its interpretability. Subject matter experts can usually tell in advance which features should be predictive, and if your model disagrees with their intuition, that is a good reason to investigate further; it could point to mistakes in your pipeline. Explanations can be crucial after deployment too, because it is often not enough to provide predictions alone. If the model has significant impact (e.g., in finance or health care), explanations specific to each data point (e.g., a customer or a patient) are a must.

In this workshop, we will review several methods and tools for calculating global and local explanations. Global explanations provide an overview of your model and answer questions like ‘Which features does my model rely on the most or the least in general?’. Local explanations describe how much each feature contributes to the prediction for each data point. Local explanations are important if you need to measure the equity and fairness of your model.
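To make the distinction concrete, here is a minimal sketch (a toy illustration using scikit-learn's diabetes dataset, not the workshop notebook): with standardized features, the absolute coefficients of a linear model give a global ranking of the features, while coefficient times feature value gives each feature's additive contribution to one particular prediction, i.e., a local explanation.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_std = StandardScaler().fit_transform(X)  # put all features on the same scale
model = Ridge().fit(X_std, y)

# Global explanation: |coefficient| ranks how strongly the model relies on each feature overall.
global_ranking = sorted(zip(np.abs(model.coef_), X.columns), reverse=True)

# Local explanation: coefficient * feature value is that feature's additive
# contribution to this one prediction (relative to the intercept).
local_contributions = dict(zip(X.columns, model.coef_ * X_std[0]))

print(global_ranking)
print(local_contributions)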

We will start with linear models and discuss under what conditions their weights can be used as explanations. I will introduce the pros and cons of permutation feature importance, a model-agnostic approach to calculating global explanations. Some model-specific explanations will also be briefly discussed. We will close with what I consider to be the state-of-the-art technique for calculating local feature importances: Shapley Additive Explanations (SHAP).
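As a preview of those two tools (an assumed toy setup with the same diabetes dataset, not the actual workshop materials): scikit-learn's permutation_importance gives a model-agnostic global explanation of an XGBoost model, and shap's TreeExplainer produces local, per-data-point feature contributions.

import shap
import xgboost as xgb
from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(X_train, y_train)

# Global: shuffle one column at a time and measure how much the test score drops.
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(sorted(zip(perm.importances_mean, X.columns), reverse=True))

# Local: SHAP assigns each feature an additive contribution to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(dict(zip(X.columns, shap_values[0])))  # contributions for one test point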

I’ll provide a GitHub repository with a reproducible conda environment and a Jupyter notebook used as slides. We will use Python and packages like pandas, numpy, sklearn, XGBoost, and shap.

Bio: 

Andras Zsom is an Assistant Professor of the Practice of Data Science and Director of Industry and Research Engagement at Brown University, Providence, RI. He works with high-level academic administrators to tackle predictive modeling problems, he collaborates with faculty members on data-intensive research projects, and he was the instructor of a data science course offered to the data science master's students at Brown.
