Abstract: AI systems can embed human and societal bias and then be deployed at scale. Many algorithms are now being re-examined for unlawful bias. So how do you remove bias & discrimination from the machine learning pipeline? In this workshop you will learn debiasing techniques that can be implemented using the open source toolkit AI Fairness 360.
AI Fairness 360 (AIF360) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. It contains the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
You will learn:
* how to measure bias in your data sets & models
* how to apply the fairness algorithms to reduce bias
* how to work through a practical use case of bias measurement & mitigation
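As a taste of the first two bullets, here is a minimal, self-contained sketch in plain Python of one widely used metric, statistical parity difference, and one mitigation idea, reweighing. The data and function names are invented for illustration; this is not AIF360's actual API, which wraps the same ideas in dataset and metric classes.

```python
# Toy data (illustrative only): a binary protected attribute
# (0 = unprivileged, 1 = privileged) and a binary outcome (1 = favorable).
protected = [0, 0, 0, 0, 1, 1, 1, 1]
labels    = [0, 0, 1, 0, 1, 1, 0, 1]

def statistical_parity_difference(protected, labels):
    """P(favorable | unprivileged) - P(favorable | privileged).

    0 means parity; a negative value means the unprivileged group
    receives the favorable outcome less often.
    """
    def favorable_rate(group):
        ys = [y for g, y in zip(protected, labels) if g == group]
        return sum(ys) / len(ys)
    return favorable_rate(0) - favorable_rate(1)

print(statistical_parity_difference(protected, labels))  # -0.5

# Reweighing-style mitigation: weight each (group, label) cell by
# P(group) * P(label) / P(group, label), so that in the reweighted
# data the protected attribute and the label are independent.
def reweighing_weights(protected, labels):
    n = len(labels)
    weights = []
    for g, y in zip(protected, labels):
        p_g  = sum(1 for x in protected if x == g) / n
        p_y  = sum(1 for x in labels if x == y) / n
        p_gy = sum(1 for a, b in zip(protected, labels)
                   if a == g and b == y) / n
        weights.append(p_g * p_y / p_gy)
    return weights

weights = reweighing_weights(protected, labels)
```

After reweighing, the weighted favorable rates of the two groups are equal, so a model trained with these sample weights no longer sees the disparity in the raw data.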
Prerequisites: basic knowledge of machine learning and some experience with Python are helpful, but not required.
Bio: Margriet is the global data science developer advocacy focal at IBM. As a data scientist, she has a passion for exploring different ways to work with and understand diverse data using open-source tools. She is active in developer communities, attending and presenting at conferences and organising meetups. She has a background as a climate scientist, researching large observational datasets of carbon uptake by forests and the output of global-scale weather and climate models.