
Abstract: In an era of autonomous cars, drones, and automated medical diagnostics, we need to understand how to interpret the decisions made by machine learning models. With that understanding, we can debug models and retrain them more efficiently.
This talk is aimed at managers, developers, and data scientists who want to learn how to interpret the decisions made by machine learning models. We explain the difference between white-box and black-box models, the taxonomy of explainable models, and the main approaches to XAI (explainable AI). Knowing XAI methods is especially useful in regulated industries.
We start with basic methods such as regression, decision trees, and ensemble methods, and end with more complex methods based on neural networks, using a different data set for each example. Finally, we show how model-agnostic methods can be used to interpret these models and discuss why interpreting neural networks remains difficult.
Bio: Karol Przystalski obtained a PhD in Computer Science in 2015 at the Jagiellonian University in Cracow. He is the CTO and founder of Codete, where he leads and mentors teams working with Fortune 500 companies on data science projects. He also built a research lab for machine learning methods and big data solutions at Codete. Karol gives talks and training sessions in data science, with a focus on applied machine learning, in German, Polish, and English.