Abstract: In the past, AIs have been mere tools. Arguably, that is still true today. But in the (perhaps near) future, some AI systems may also be moral patients. That is to say, they may have various kinds and degrees of moral status. It will matter morally not only what they do to us, or what we do to each other using these AIs, but also what we do to them. This prospect presents us with formidable theoretical (and ultimately also practical) challenges, as we will need to develop ethical, legal, and political frameworks that can permit the peaceful and cooperative coexistence of a wide range of different morally considerable beings—including human beings, sentient nonhuman animals, and many different kinds of AIs. Although we don’t currently have all or even most of the answers about how this should be done, we can already see that many applied moral principles, which are widely endorsed in the present human context, will need to be abandoned or modified in a context that also includes digital minds, owing to several ways in which the nature of digital minds differs from that of biological humans.
Bio: Nick Bostrom is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), which became a New York Times bestseller and sparked a global conversation about the future of AI. His academic work has been translated into more than 30 languages, and he is the world’s most cited philosopher aged 50 or under. He is a repeat main-stage TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list as the youngest person in the top 15. Some of his recent work has focused on the ethics of digital minds. He has a book in the works on a topic yet to be disclosed.