Abstract: Stories of AI gone wrong, particularly in the area of bias and fairness, are everywhere. These stories and headlines increase mistrust in AI: companies are skeptical of implementing AI solutions, and consumers are skeptical of AI recommendations.
In this talk, we discuss ways to implement trustworthy AI and show how machine learning can make the implicit bias of a human institution explicit. Bias becomes diagnosable, correctable, and ultimately preventable in a way that cannot be replicated in human decision-making, which is opaque and difficult to change. Bias is not new, but AI offers a new toolset to measure and change it.
Bio: Haniyeh is a Data Science Researcher on DataRobot's Trusted AI team. Her research focuses on bias, privacy, robustness and stability, and ethics in AI and machine learning. She has a demonstrated history of implementing ML and AI in a variety of industries and initiated the incorporation of bias and fairness features into the DataRobot product. She is a thought leader in the area of AI bias and ethical AI. Haniyeh holds a PhD in Astronomy and Astrophysics from the Rheinische Friedrich-Wilhelms-Universität Bonn.