
Abstract: In this talk, we will explore how Machine Learning (ML) systems can break, leading to incorrect predictions, bias, or even security vulnerabilities. We will discuss ten common failure modes, including data poisoning attacks, adversarial examples, concept drift, and model inversion attacks.
We will also provide guidance on how to build robust ML systems that can withstand these challenges. This will include strategies such as collecting diverse and representative data, monitoring for concept drift, implementing model interpretability and explainability techniques, and using adversarial training to defend against attacks. Additionally, we will discuss the importance of incorporating ethical considerations into the design and deployment of ML systems.
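To make one of these defenses concrete, below is a minimal sketch of adversarial training using the fast gradient sign method (FGSM) in TensorFlow. The model, dataset, mixing weight, and epsilon are illustrative assumptions, not material from the talk itself.

```python
# A minimal adversarial-training sketch (FGSM) in TensorFlow.
# The epsilon, 50/50 clean/adversarial mix, and model are assumptions
# for illustration only.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft adversarial examples by stepping in the direction of the
    sign of the loss gradient with respect to the input."""
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)

@tf.function
def adversarial_train_step(model, optimizer, x, y):
    """One training step on a mix of clean and adversarial inputs,
    so the model learns to resist small worst-case perturbations."""
    x_adv = fgsm_perturb(model, x, y)
    with tf.GradientTape() as tape:
        loss = 0.5 * loss_fn(y, model(x, training=True)) \
             + 0.5 * loss_fn(y, model(x_adv, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```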
Attendees will leave with a better understanding of the potential vulnerabilities in their ML systems and actionable steps to build more robust and trustworthy systems.
Bio: Bhakti is a Responsible AI Tech Lead at Google Research, where she develops fair, safe, and robust AI systems. She has spearheaded responsible AI efforts across numerous Google products, including YouTube, Maps, Android, and Ads, helping ensure that the ML powering these applications is fair, transparent, and safe for all. She is also a strong supporter of open-source technology and maintains several offerings in the TF Responsible AI toolkit, used by developers worldwide to make their ML workflows more responsible.

Bhaktipriya Radharapu
Title: ML Tech Lead, Responsible AI | Google
