Abstract: Uncertainty is a critical consideration in any application of deep learning, particularly when safety is on the line. Conventional deep learning approaches lead to models that are overconfident, fragile under domain shift, helpless against adversarial attacks, and unable to convey their implicit understanding of uncertainty in data. This talk will survey methods for quantifying and handling different sources of uncertainty, with a focus on practical, scalable techniques for a variety of common use cases. From principled Bayesian approaches to post-hoc calibration, we’ll cover the theory, tools, techniques, and tips you need to better handle uncertainty in your own deep learning applications.
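As a small illustration of the post-hoc calibration mentioned above, the sketch below implements temperature scaling, one common post-hoc method: a single scalar temperature is fit on held-out logits to minimize negative log-likelihood, softening overconfident predictions without changing the model's accuracy. The function names, the synthetic data, and the grid-search fitting strategy are illustrative assumptions, not code from the talk.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Simple grid search for the temperature minimizing held-out NLL
    # (a 1-D convex-ish problem; gradient methods also work).
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy example: large-magnitude logits mimic an overconfident network.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(size=(200, 3)) * 5.0   # noisy, overconfident scores
logits[np.arange(200), labels] += 2.0      # add some true signal
T = fit_temperature(logits, labels)
```

Because temperature scaling only rescales logits, the argmax class is unchanged; only the predicted probabilities become better calibrated.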
Bio: Rebecca Russell is a Senior Machine Learning Scientist in the Perception and Autonomy group at Draper. She received her Ph.D. from MIT and B.S. from Caltech, both in physics. Since joining Draper in 2016, Dr. Russell has led work on using deep learning to solve technical challenges in a wide variety of domains including robotics, autonomous vehicles, medical image analysis, and cybersecurity. Her current research is focused on creating trustworthy and competency-aware deep learning autonomous systems.