Abstract: Machine Learning and AI models are often considered black boxes, and the lack of transparency in how these models work leads to a lack of trust and adoption. To promote end-user trust, Explainable AI (XAI) and model explainability techniques like LIME and SHAP are widely used. But are these methods really human-friendly? Do we have any existing method or framework that can provide explainability to non-technical users? How can we use XAI to bridge the gap between AI and end users to promote AI adoption?
This session will address all these questions. The session is divided into 3 parts (15 mins each):
1. Conceptual understanding of XAI methods: dimensions of explainability; pre-hoc/post-hoc methods; local/global methods; model-specific/model-agnostic methods; different types of explainability methods, and so on
2. Existing Python frameworks for model explainability, with reference to hands-on examples (LIME, SHAP, TCAV, DALEX, ExplainerDashboard, Alibi, DiCE)
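To give a flavor of the model-agnostic, post-hoc explanations that frameworks like LIME and SHAP produce, here is a deliberately simplified sketch in pure Python: it perturbs one feature of a single instance at a time and measures the change in the model's output. The "model" and data below are hypothetical toys for illustration, not the actual LIME or SHAP algorithms.

```python
# Toy illustration of a model-agnostic, post-hoc local explanation:
# replace each feature with a baseline value and observe how much the
# model's prediction changes (a much-simplified cousin of LIME/SHAP).

def black_box_model(x):
    # Hypothetical "black box": a weighted sum with a decision threshold.
    return 1.0 if 0.6 * x[0] + 0.3 * x[1] + 0.1 * x[2] > 0.5 else 0.0

def local_importance(predict, instance, baseline=0.0):
    """Score each feature of one instance by the drop in the model's
    output when that feature is ablated to a baseline value."""
    base_pred = predict(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline  # ablate feature i
        scores.append(base_pred - predict(perturbed))
    return scores

instance = [1.0, 1.0, 1.0]
print(local_importance(black_box_model, instance))
# Here only ablating feature 0 flips the prediction, so it gets the
# highest score: [1.0, 0.0, 0.0]
```

Real frameworks are far more sophisticated (LIME fits a local surrogate model over many perturbations; SHAP averages contributions over feature coalitions), but the core idea of probing a black box with perturbed inputs is the same.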
3. ENDURANCE - End User Centric Artificial Intelligence (understanding open challenges of XAI, industry best practices, bridging the XAI gaps)
Bio: Aditya Bhattacharya is an Explainable AI Researcher at KU Leuven with 7 years of overall experience in Data Science, Machine Learning, IoT, and Software Engineering. Prior to his current role, Aditya worked in various roles at organizations such as West Pharma, Microsoft, and Intel to democratize AI adoption for industrial solutions. As the AI Lead at West Pharma, he contributed to forming the AI Centre of Excellence, managing and leading a global team of 10+ members focused on building AI products. He holds a Master's degree in Computer Science with ML from Georgia Tech and a Bachelor's degree in ECE from VIT University. Aditya is passionate about bringing AI closer to end-users through his various initiatives for the AI community. He has also authored the book "Applied Machine Learning Explainability Techniques".