Explainable Machine Learning – A Human Centric Perspective


Machine Learning and AI models are often considered black boxes, and the lack of transparency in how these models work leads to a lack of trust and adoption. To promote end-user trust, Explainable AI (XAI) and model-explainability techniques such as LIME and SHAP are widely used. But are these methods really human-friendly? Is there an existing method or framework that can provide explainability to non-technical users? How can we use XAI to bridge the gap between AI and end users and promote AI adoption?

This session will address all of these questions. It is divided into three parts (15 minutes each):

1. Conceptual understanding of XAI methods (dimensions of explainability; pre-hoc/post-hoc methods; local/global methods; model-specific/model-agnostic methods; different types of explainability methods, and so on)
2. Existing Python frameworks for model explainability, with hands-on examples (LIME, SHAP, TCAV, DALEX, ExplainerDashboard, ALIBI, DiCE)
3. ENDURANCE - End-User-Centric Artificial Intelligence (understanding open challenges of XAI, industry best practices, bridging the XAI gaps)
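To make the taxonomy in part 1 concrete before reaching the framework-specific examples in part 2, here is a minimal, self-contained sketch of one cell of that taxonomy: a model-agnostic, post-hoc, global explanation via permutation importance, using only scikit-learn. The dataset and model choices below are illustrative assumptions, not the session's actual materials.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black box") model on an example dataset
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Post-hoc, model-agnostic, GLOBAL explanation: shuffle each feature
# and measure how much held-out accuracy degrades
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

# Rank features by mean importance and show the top five
feature_names = load_breast_cancer().feature_names
top = np.argsort(result.importances_mean)[::-1][:5]
for i in top:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.4f}")
```

Because permutation importance only needs predictions, the same code works unchanged for any fitted estimator; local methods such as LIME and SHAP (covered hands-on in part 2) instead explain individual predictions.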



Open Data Science
One Broadway
Cambridge, MA 02142
