Abstract: Ensuring that machine learning models truly understand the concepts we wish them to learn, and can make conceptual leaps to generalize to novel concepts, is a major problem on the path to human-level AI. In this talk I will describe some of the work we have been doing at Facebook AI Research in pursuit of this problem. I will first describe CURI, a novel task that tests the conceptual-leap ability of AI models. I will then probe concepts from a more foundational perspective, examining when models truly understand concepts and generalize out of distribution, and when they operate in a manner that demonstrates an understanding of objects in a scene or of important, noteworthy waypoints in reinforcement learning. I will conclude with thoughts on future work.
Bio: Ramakrishna Vedantam is a Research Scientist at Facebook AI Research (FAIR) in New York. Previously, he obtained his Ph.D. from the Georgia Institute of Technology (2018) and an MS from Virginia Tech (2016), and completed his undergraduate studies at IIIT Hyderabad (2013). His research interest is in machine learning that mimics the capabilities of human learning and reasoning. He has been awarded the Google Ph.D. Fellowship in Machine Perception and has received best reviewer awards at ICCV and CVPR.