Should Self-Driving Cars Kill Their Passengers For the ‘Greater Good’?
By Laura Dang
A classic ethical dilemma is now being posed to smart, self-driving cars: in a hypothetical collision, should a self-driving car sacrifice its driver's life in order to save the lives of 10 other people, or vice versa?
A team of researchers led by Jean-Francois Bonnefon of the Toulouse School of Economics believes such scenarios are unavoidable and that the decisions made about them will have far-reaching implications. In their paper, posted to the preprint server arXiv, the researchers wrote:
“It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas.
“We argue [that] to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm.”
Several hundred people were surveyed via Amazon’s Mechanical Turk, an online crowdsourcing tool. They were presented with a scenario in which a self-driving car is headed for an unavoidable collision with a group of 10 pedestrians in the road. Participants were asked to choose between swerving, which would kill the driver but save the pedestrians, and staying on course, which would kill the pedestrians but save the driver.
The results revealed that a majority of respondents were willing to sacrifice the driver in order to save others. However, they were more likely to say so when they did not imagine themselves as the driver. In addition, 75% of test subjects agreed it would be moral to swerve, but only 65% of respondents thought cars would actually be programmed to do so.
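To make the dilemma concrete, here is a minimal, hypothetical sketch of how a purely “utilitarian” decision rule for the survey scenario might look in code. This is an illustration only, not the researchers’ algorithm; the function name, data structure, and casualty counts are assumptions drawn from the scenario described above.

```python
# Hypothetical illustration only -- not taken from the paper.
# A toy "utilitarian" rule: pick whichever action costs fewer lives.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str       # e.g. "swerve" or "stay_course"
    casualties: int   # lives lost if this action is taken

def choose_action(outcomes):
    """Return the action that minimizes total casualties."""
    return min(outcomes, key=lambda o: o.casualties).action

# The survey scenario: swerving kills the 1 driver,
# staying on course kills the 10 pedestrians.
scenario = [
    Outcome(action="swerve", casualties=1),
    Outcome(action="stay_course", casualties=10),
]

print(choose_action(scenario))  # -> "swerve"
```

A “self-protective” algorithm would simply weight the driver’s life more heavily before comparing outcomes, and choosing between those two weightings is exactly the kind of design decision the researchers argue manufacturers and regulators will have to confront.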
The researchers also raised further moral questions, such as:
“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today. As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
h/t: IFLScience