
Abstract: There are many types of users and stakeholders who require Explainable AI. Without explanations, end-users are less likely to trust and adopt ML-based technologies. Without a means of understanding model decision making, business stakeholders have a difficult time assessing the value and risks associated with launching a new ML-based product. And without insights into why an ML application is behaving in a certain way, application developers have a harder time troubleshooting issues, and ML scientists have a more difficult time assessing their models for fairness and bias. To further complicate an already challenging problem, the audiences for ML model explanations come from varied backgrounds, have different levels of experience with statistics and mathematical reasoning, and are subject to cognitive biases. They will also be relying on ML and Explainable AI in a variety of contexts and for many different tasks. Providing human-understandable explanations for predictions made by complex ML models is undoubtedly a wicked problem. In this talk I’ll cover the human factors that influence how explanations are interpreted and used by end-users. I’ll also present a framework for what to keep in mind when designing and assessing interpretable ML systems.
Bio: Meg is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. She has had a varied career working for start-ups and large corporations alike across fields such as EdTech, weather forecasting, and commercial robotics. She has published articles on topics such as information visualization, educational-technology design, human-robot interaction (HRI), and voice user interface (VUI) design. Meg is also a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction (HCI).