
Abstract: Machine Learning and AI models are often considered black boxes, and the lack of transparency in how these models work leads to a lack of trust and adoption. To promote end-user trust, Explainable AI (XAI) and model explainability techniques such as LIME and SHAP are widely used. But are these methods really human-friendly? Do we have any existing method or framework that can provide explainability to non-technical users? How can we use XAI to bridge the gap between AI and end users and promote AI adoption?
This session will address all these questions. The session is divided into 3 parts (15 minutes each):
1. Conceptual understanding of XAI methods (dimensions of explainability; pre-hoc/post-hoc methods; local/global methods; model-specific/model-agnostic methods; different types of explainability methods, and so on)
2. Existing Python frameworks for model explainability, with reference to hands-on examples (LIME, SHAP, TCAV, DALEX, ExplainerDashboards, ALIBI, DICE); see the sketch after this list
3. ENDURANCE - End-User-Centric Artificial Intelligence (understanding open challenges of XAI, industry best practices, bridging the XAI gaps)
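
To give a flavour of the hands-on examples in part 2, below is a minimal sketch (not taken from the session materials) of a post-hoc, model-specific SHAP explanation of a scikit-learn model. The dataset, model choice, and printed output are illustrative assumptions, not the session's actual code.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple "black-box" regressor on a bundled scikit-learn dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer: a post-hoc, model-specific explainer for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: per-feature contributions to the first prediction
print(dict(zip(X.columns, shap_values[0].round(3))))

# Global explanation: aggregate feature importance across the dataset
shap.summary_plot(shap_values, X)

The other listed frameworks follow similar patterns: LIME and DALEX wrap a trained model behind a model-agnostic explainer, while DICE produces counterfactual examples rather than feature attributions.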
Bio: Coming soon!

Aditya Bhattacharya
Title: Explainable AI Researcher | KU Leuven
