A Human-Machine Collaboration Built on Trust and Accountability

Abstract: 

As humans and AI technologies collaborate ever more closely in crucial areas of the economy and everyday life, there is a need to establish accountability for joint actions taken on the basis of trust, human values, engineering principles and societal ethics. For example: AI systems should prompt humans to take over before reaching the limits of their design; humans should be able to ask AI systems to explain their suggestions before accepting them; and mechanisms should be in place to hold a hybrid system ("self-driving under human instructions") unambiguously accountable when harm befalls humans ("homicide") or property. In this talk, we will discuss some of the opportunities and barriers to human-machine collaboration, along with technological and policy advances to address them.

Bio: 

Dr. Biplav Srivastava is a Distinguished Data Scientist and Master Inventor at IBM's Chief Analytics Office. With over two decades of research experience in Artificial Intelligence, Services Computing and Sustainability, most of it at IBM Research, Biplav is also an ACM Distinguished Scientist, AAAI Senior Member and IEEE Senior Member. His focus is on promoting goal-oriented, ethical, human-machine collaboration via natural interfaces, using domain and user models, learning and planning. He applies these techniques in areas of social as well as commercial relevance, with particular attention to the issues of developing countries (e.g., transportation, water, health and governance). Biplav's work has led to many science firsts and high-impact commercial innovations ($B+), 150+ papers, 50+ issued US patents, and awards for papers, demos and hacks. He has interacted with commercial customers, universities and governments, served on multilateral bodies, and assisted business leaders on technical issues.