The Trolley Problem and Isaac Asimov’s First Law of Robotics

Erik Persson and Maria Hedlund

Lund University, Sweden

Abstract

How to make robots safe for humans is intensely debated, within academia as well as in industry, the media, and the political arena. Hardly any discussion of the subject fails to mention Isaac Asimov’s Three Laws of Robotics. Asimov’s laws and the Trolley Problem are usually discussed separately, but they are connected in that the Trolley Problem poses a seemingly unsolvable problem for Asimov’s First Law, which states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” That is, the law contains an active and a passive clause and obliges the robot to obey both, while the Trolley Problem forces us to choose between these two options. The aim of this paper is to investigate whether and how Asimov’s First Law of Robotics can handle a situation where we are forced to choose between the active and the passive clauses of the law. We discuss four possible solutions to the challenge, used explicitly or implicitly by Asimov. We conclude that all four suggestions would solve the problem, but in different ways and with different implications for other dilemmas in robot ethics. We also conclude that, considering the urgency of finding ways to secure a safe coexistence between humans and robots, we should not let the Trolley Problem stand in the way of using the First Law of Robotics for this purpose. If we want to use Asimov’s laws for this purpose, we also recommend discarding the active clause of the First Law.

About the Authors:

Erik Persson is a philosopher with a Ph.D. in Practical Philosophy from Lund University. He has worked, among other places, at Umeå University, Research Institutes of Sweden (RISE), the Nordic Genetic Resource Center (NordGen), the Center for Theological Inquiry in Princeton, and the Pufendorf Institute for Advanced Studies in Lund. He is presently employed as Associate Professor of Practical Philosophy at Lund University. His main research interests are environmental ethics and philosophy, space humanities, and the ethics and philosophy of emerging technologies, including AI and robotics.

ORCiD: 0000-0002-5000-948X

https://www.researchgate.net/profile/Erik-Persson-3

Maria Hedlund is a political scientist focusing on democracy, experts, ethics, and responsibility in relation to emerging technologies such as biotechnology and AI. She has a Ph.D. in Political Science from Lund University, Sweden, where she is also an Associate Professor and holds a position as a Senior Lecturer. Her recent work illuminates expert responsibility in the context of AI development, in particular the responsibility of ethics experts.

ORCiD: 0000-0002-3101-5956

https://www.svet.lu.se/maria-hedlund


Published: 2024-07-01

Issue: Vol 7 (2024)

Section: Yearly Theme

Copyright (c) 2024 Erik Persson and Maria Hedlund

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By submitting to this journal, you acknowledge that the work you submit has not been published before.

Articles and any other work submitted to this journal are published under a Creative Commons Attribution-NonCommercial license; that is, by virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

There are no fees for authors publishing in the Journal.