A Data Science Playbook for Explainable AI – Navigating Predictive and Interpretable Models

Abstract: 

Model ethics, interpretability, and trust will be central issues in data science over the coming decade. This technical talk surveys traditional and modern approaches to interpreting black-box models. We will also review cutting-edge research from UCSF, CMU, and industry that exposes weaknesses in traditional techniques such as SHAP and LIME when applied to certain deep network architectures, and that introduces a new approach to explainable modeling in which interpretability is treated as a hyperparameter during model building rather than as a post-modeling exercise. We will provide step-by-step guides that practitioners can apply in their own work, walk through code examples of interpretability techniques, and share notebooks for attendees to download.
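The talk's notebooks are not reproduced here, but as a flavor of the material: SHAP-style attributions are built on Shapley values from cooperative game theory. The sketch below (an illustrative, from-scratch example, not code from the talk) computes exact Shapley values for a tiny model by enumerating feature coalitions, with absent features replaced by a baseline value; the toy model `f` and all names are assumptions for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, len(x) features.

    v(S) evaluates f with features in coalition S taken from x and the
    remaining features taken from the baseline (a simple value function
    of the kind SHAP approximates for real models).
    """
    n = len(x)

    def v(S):
        return f([x[i] if i in S else baseline[i] for i in range(n)])

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        # Weighted marginal contribution of feature i over all coalitions S
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Toy model with an interaction term between the two features
f = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
phi = shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
print(phi)  # [2.5, 3.5]
```

By construction the attributions sum to `f(x) - f(baseline)` (here 6.0), the "local accuracy" property that makes Shapley-based explanations attractive; the exponential enumeration is why practical tools like SHAP rely on model-specific shortcuts or sampling.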

Bio: 

Josh Poduska is the Chief Data Scientist at Domino Data Lab. He has 18 years of experience in analytics, with a focus on designing and implementing data science solutions in the manufacturing and public-sector domains. He has also led data science teams and strategy at several analytical software companies. Josh holds a Master's in Applied Statistics from Cornell University.