Abstract: Modern artificial intelligence, and deep learning in particular, is extremely capable at learning predictive models from vast amounts of data. The expectation of many AI researchers, as well as the general public, is that AI will go from powering customer service chatbots to providing mental health services. That it will go from personalized advertising to deciding who is granted bail. That it will go from speech recognition to writing laws. The expectation is that AI will solve society’s problems by simply being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI technology will naturally learn to reason from data. That is, the assumption that it can form trains of thought that “make sense”, similar to how a mental health professional, a judge, or a lawyer might reason about a case, or, more formally, how a mathematician might prove a theorem. This talk will investigate whether this behavior can be learned from data, and how we can design the next generation of artificial intelligence techniques that can achieve such capabilities.
Bio: Guy Van den Broeck is an Associate Professor and Samueli Fellow in the Computer Science Department at UCLA, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, KR, and AAAI (honorable mention). He also serves as an Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.