Uncertainty in Deep Learning


Uncertainty is a critical consideration in any application of deep learning, particularly when safety is on the line. Conventional deep learning approaches lead to models that are overconfident, fragile under domain shift, helpless against adversarial attacks, and unable to convey their implicit understanding of uncertainty in data. This talk will survey methods for quantifying and handling different sources of uncertainty, with a focus on practical, scalable techniques for a variety of common use cases. From principled Bayesian approaches to post-hoc calibration, we’ll cover the theory, tools, techniques, and tips you need to better handle uncertainty in your own deep learning applications.
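As a concrete illustration of the post-hoc calibration methods mentioned above, here is a hedged sketch of temperature scaling in PyTorch: a single scalar temperature T is fit on held-out logits to reduce overconfidence. The function name `fit_temperature` and the synthetic data are illustrative, not from the talk.

```python
import torch
import torch.nn as nn

def fit_temperature(logits, labels, max_iter=100):
    """Post-hoc temperature scaling: learn a single scalar T > 0 that
    rescales logits to minimize NLL on held-out data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter,
                            line_search_fn="strong_wolfe")
    nll = nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# Illustrative synthetic data: deliberately overconfident logits (scaled 3x).
torch.manual_seed(0)
labels = torch.randint(0, 5, (200,))
logits = 3.0 * (torch.randn(200, 5) + 2.0 * nn.functional.one_hot(labels, 5))
T = fit_temperature(logits, labels)
```

Because only one parameter is fit, temperature scaling preserves the model's accuracy (the argmax is unchanged) while improving its confidence estimates.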

● PyTorch
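On the Bayesian side, one widely used scalable approximation is Monte Carlo dropout: keep dropout active at test time and average predictions over repeated stochastic forward passes. A minimal PyTorch sketch (the toy model and `mc_dropout_predict` helper are illustrative assumptions, not the speaker's code):

```python
import torch
import torch.nn as nn

# Hypothetical small classifier; any model containing nn.Dropout works.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    """MC dropout: average softmax outputs over stochastic forward passes."""
    model.train()  # keeps dropout active (in practice, freeze batch-norm separately)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)  # approximate predictive distribution
    # Entropy of the mean prediction as a simple total-uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

x = torch.randn(4, 10)
mean, entropy = mc_dropout_predict(model, x)
```

Higher entropy flags inputs the model is less certain about, which is useful for triggering fallbacks in safety-critical pipelines.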



Rebecca Russell is a Senior Machine Learning Scientist in the Perception and Autonomy group at Draper. She received her Ph.D. from MIT and B.S. from Caltech, both in physics. Since joining Draper in 2016, Dr. Russell has led work on using deep learning to solve technical challenges in a wide variety of domains including robotics, autonomous vehicles, medical image analysis, and cybersecurity. Her current research is focused on creating trustworthy and competency-aware deep learning autonomous systems.

Open Data Science
One Broadway
Cambridge, MA 02142
