Practical Tutorial on Uncertainty and Out-of-distribution Robustness in Deep Learning


Deep neural networks can make overconfident errors and assign high-confidence predictions to inputs far away from the training data. Well-calibrated predictive uncertainty estimates are important for knowing when to trust a model's predictions, especially for safe deployment in applications where the train and test distributions can differ. I'll first present some concrete examples that motivate the need for uncertainty and out-of-distribution (OOD) robustness in deep learning. Next, I'll present an overview of our recent work on building neural networks that know what they don't know: this includes methods that improve single-model uncertainty (e.g. spectral-normalized neural Gaussian processes), methods that average over multiple neural network predictions, such as Bayesian neural nets and deep ensembles, and methods that leverage better representations (e.g. pre-trained transformers for improving "near-OOD" detection).
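To make the deep-ensembles idea mentioned above concrete: average the softmax outputs of several independently trained networks, so that disagreement between members shows up as lower confidence. A minimal NumPy sketch (the function name is my own for illustration, not from any library):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average the predictive distributions of ensemble members.

    member_probs: array of shape (n_members, n_examples, n_classes),
    each slice a softmax output from an independently trained network.
    """
    member_probs = np.asarray(member_probs)
    return member_probs.mean(axis=0)

# Two hypothetical members disagree on an input: the averaged
# prediction is less confident, reflecting that disagreement.
p1 = np.array([[0.9, 0.1]])
p2 = np.array([[0.3, 0.7]])
print(ensemble_predict([p1, p2]))  # [[0.6 0.4]]
```

Each member is trained from a different random initialization; nothing Bayesian is required, which is part of why deep ensembles are a strong practical baseline.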

Session Outline:
The session will help attendees develop intuition for the problem and learn some simple techniques to improve performance in practice (e.g. via demo colabs).

- Understand how to measure the quality of uncertainty and robustness.

- Understand how to improve uncertainty/robustness in a single model.

- Understand how to combine multiple models (ensembles, Bayesian NNs) to further improve uncertainty/robustness.

- Leverage recent advances in representation learning (pre-training, transformers, etc).
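To make the first bullet concrete, one standard way to measure calibration quality is the Expected Calibration Error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. A minimal NumPy sketch of this metric (my own illustrative implementation, not tied to any particular library):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: confidence-weighted gap between confidence and accuracy.

    probs:  (n_examples, n_classes) softmax outputs
    labels: (n_examples,) integer class labels
    """
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    confidences = probs.max(axis=1)          # top-class probability
    predictions = probs.argmax(axis=1)       # predicted class
    accuracies = (predictions == labels).astype(float)

    ece = 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |accuracy - confidence| in this bin, weighted by bin size
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

A perfectly calibrated model has ECE 0; an overconfident model (high confidence, lower accuracy) has a large ECE.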

Some representative talks:

Practical recipes for improving uncertainty/robustness and building models that "know what they don't know"

Tutorial on uncertainty in deep learning at CIFAR summer school


Balaji is currently a Staff Research Scientist at Google Brain, working on machine learning and its applications. Previously, he was a research scientist at DeepMind for over 4.5 years. Before that, he received a PhD in machine learning from the Gatsby Unit, UCL, supervised by Yee Whye Teh.

His research interests are in scalable, probabilistic machine learning. More recently, he has focused on:
- Uncertainty and out-of-distribution robustness in deep learning
- Deep generative models including generative adversarial networks (GANs), normalizing flows and variational auto-encoders (VAEs)
- Applying probabilistic deep learning ideas to solve challenging real-world problems.

Open Data Science



