Abstract: While deep learning has driven impressive progress, one of the toughest remaining challenges is generalization beyond the training distribution. Few-shot learning is an area of research that aims to address this by striving to build models that can learn new concepts rapidly, in a more "human-like" way. While many influential few-shot learning methods were based on meta-learning, recent progress has been made by simpler transfer learning algorithms, and it has even been suggested that few-shot learning might be an emergent property of large-scale models. In this talk, I will give an overview of the evolution of few-shot learning methods and benchmarks from my point of view, and discuss the evolving role of meta-learning for this problem. I will cover lessons learned from using larger and more diverse benchmarks for evaluation, as well as trade-offs between different approaches, closing with a discussion of open questions.
Module 1: basic background on the formulation of few-shot learning, transfer learning, meta-learning
Module 2: overview of benchmarks and approaches for cross-domain few-shot learning and research findings
Module 3: overview of in-context learning, and discussion of few-shot learning as an emergent property in large models
After my session, attendees will have a high-level view of how the field of few-shot learning has evolved, both in terms of the benchmarks used to track progress and the methods designed to tackle the problem (and the role of meta-learning in particular).
Prerequisites: Basic familiarity with deep learning
Bio: Eleni is a Research Scientist at Google DeepMind, based in London, UK. She obtained her PhD from the University of Toronto, where she was advised by Professors Richard Zemel and Raquel Urtasun. Her research centers on creating methods that allow efficient and effective adaptation of deep neural networks to cope with distribution shifts, the introduction of new concepts, or the removal of outdated or harmful knowledge, spanning the areas of few-shot learning, meta-learning, domain adaptation, and machine unlearning.