NextShark.com

Should Self-Driving Cars Kill Their Passengers For the ‘Greater Good’?

October 27, 2015
A classic ethical dilemma is now being posed to smart, self-driving cars: in an unavoidable collision, should a self-driving car sacrifice the driver’s life in order to save the lives of 10 other people, or vice versa?
A team of researchers led by Jean-Francois Bonnefon of the Toulouse School of Economics believes such scenarios are unavoidable and that the decisions made about them will have far-reaching implications. In their paper, posted to the preprint server arXiv, the researchers wrote:
“It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas.
“We argue [that] to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm.”
Several hundred people were surveyed via Amazon’s Mechanical Turk, an online crowdsourcing platform. They were presented with a scenario in which a self-driving car is on course for an unavoidable collision with a group of 10 people. Participants were asked to choose between having the car swerve, killing the driver in order to save the pedestrians, or stay its course, hitting the pedestrians in order to save the driver.
The results revealed that most respondents were willing to sacrifice the driver in order to save the others. However, they were more likely to say so when they did not imagine themselves as the driver. In addition, 75% of respondents agreed it would be moral to swerve, but only 65% thought cars would actually be programmed to do so.
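The paper itself publishes no code, but the rule most respondents endorsed can be made concrete with a toy sketch. The short Python snippet below is entirely hypothetical: the function name, casualty counts, and the simple "fewest deaths wins" rule are illustrative assumptions, not anything from the researchers' work.

```python
# Toy illustration of the survey's dilemma, NOT a real AV control algorithm.
# All names and numbers here are hypothetical, chosen only for illustration.

def choose_maneuver(pedestrians_at_risk: int, occupants: int) -> str:
    """Pick the action that minimizes expected deaths.

    'swerve' sacrifices the car's occupants; 'stay' hits the pedestrians.
    This is the purely utilitarian rule most survey respondents endorsed.
    """
    deaths_if_swerve = occupants          # swerving kills the driver
    deaths_if_stay = pedestrians_at_risk  # staying kills the pedestrians
    return "swerve" if deaths_if_swerve < deaths_if_stay else "stay"

# The scenario from the survey: 10 pedestrians versus 1 driver.
print(choose_maneuver(pedestrians_at_risk=10, occupants=1))  # -> "swerve"
```

Even this toy version surfaces the paper's central tension: a self-protective variant would simply always return "stay", and the open question is which rule buyers, manufacturers, and regulators would actually accept.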
The researchers also posed further moral questions, such as:
“If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?
“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today. As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
h/t: IFLScience
Laura Dang is a contributor at NextShark.
