Abstract: As humans and AI technologies collaborate more closely than ever before in crucial areas of the economy and everyday life, there is a need to establish accountability for joint action, grounded in trust, human values, engineering principles and societal ethics. For example: AI systems should prompt humans to take over before reaching their design limits; humans should be able to ask AI systems to explain their suggestions before accepting them; and hybrid systems ("self-driving under human instructions") should incorporate mechanisms that assign accountability unambiguously when harm occurs to humans ("homicide") or property. In this talk, we will discuss some of the opportunities and barriers to human-machine collaboration, and technological and policy advances to address them.
Bio: Biplav Srivastava is a Professor of Computer Science at the AI Institute at the University of South Carolina. Previously, he was at IBM for nearly two decades in the roles of Research Scientist, Distinguished Data Scientist and Master Inventor. Biplav is an ACM Distinguished Scientist, AAAI Senior Member, IEEE Senior Member and AAAS Leshner Fellow for Public Engagement on AI (2020-2021). His focus is on promoting goal-oriented, ethical, human-machine collaboration via natural interfaces using domain and user models, learning and planning. He applies these techniques in areas of social as well as commercial relevance, with particular attention to issues of developing countries (e.g., transportation, water, health and governance). Biplav's work has led to many science firsts and high-impact commercial innovations ($B+), 150+ papers, 50+ US patents issued, and awards for papers, demos and hacks. He has interacted with commercial customers, universities and governments, served on multilateral bodies, and assisted business leaders on technical issues.