
Abstract: For many AI applications, a prediction is not enough. End-users need to understand the “why” behind a prediction to make decisions and take next steps. Explainable AI techniques today can provide some insight into what your model has learned, but recent research highlights the need for interactivity in XAI tools. End-users need to interact with a system and test “what if” scenarios in order to understand it and build trust in it. In this talk, I’ll discuss what human-factors research tells us about how people make decisions and how users build trust (or lose trust) in systems. I’ll also present interaction design techniques that can be applied to the design of XAI services.
Background Knowledge:
Attendees should ideally be familiar with general concepts and XAI techniques, but the talk will largely be appropriate for beginners in the XAI space.
Bio: Meg is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. She has had a varied career working for start-ups and large corporations alike, in fields ranging from EdTech and weather forecasting to commercial robotics. She has published and spoken on topics such as user research, information visualization, educational-technology design, human-robot interaction (HRI), and voice user interface (VUI) design. Meg is also a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction.