Raise awareness of environmental health issues in order to better protect our children and future generations.

EMF Studies

02 January 2017

Self-Driving Cars Are Already Deciding Who to Kill

by Gus Lubin, msn.com, 30 December 2016

Autonomous vehicles are already making profound choices about whose lives matter, according to experts, so we might want to pay attention.

"Every time the car makes a complex manoeuvre, it is implicitly making trade-off in terms of risks to different parties," Iyad Rahwan, an MIT cognitive scientist, wrote in an email.

The most well-known issues in AV ethics are trolley problems -- moral questions dating back to the era of trolleys that ask whose lives should be sacrificed in an unavoidable crash. For instance, if a person falls onto the road in front of a fast-moving AV, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?
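To make that trade-off concrete, here is a minimal, purely illustrative sketch of how such a choice could be encoded as an explicit rule. Nothing here comes from a real AV system; the function name and the probability figures are invented for the example.

```python
# Illustrative only: a hypothetical, explicit encoding of the swerve-vs-straight
# dilemma as a comparison of expected harm. Real AV planners are far more
# complex, and these probabilities are invented for the example.

def choose_maneuver(p_fatal_if_swerve: float, p_fatal_if_straight: float) -> str:
    """Pick the action with the lower expected harm."""
    # Swerving risks the passenger; going straight risks the pedestrian.
    return "swerve" if p_fatal_if_swerve < p_fatal_if_straight else "straight"

# Example: hitting the barrier is judged less likely to be fatal than
# striking the pedestrian, so this rule chooses to swerve.
print(choose_maneuver(p_fatal_if_swerve=0.2, p_fatal_if_straight=0.9))  # -> swerve
```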

Rahwan and colleagues have studied what humans consider the moral action in no-win scenarios (you can judge your own cases at their crowd-sourced project, Moral Machine).

While human-sacrifice scenarios are only hypothetical for now, Rahwan and others say they would inevitably come up in a world full of AVs.

Then there are the ethical questions that come up every day. For instance, how should AVs behave when passing a biker or pedestrian?

"When you drive down the street, you're putting everyone around you at risk," Ryan Jenkins, a philosophy professor at Cal Poly, told us. "[W]hen we're driving driving past a bicyclist, when we're driving past a jogger, we like to give them an extra bit of space because because we think it safer; even if we're very confident they we're not about to crash, we also realise that unexpected things can happen and cause us to swerve, or the biker might fall off their bike, or the jogger might slip and fall into the street."

And there's no easy answer to these questions.

"To truly guarantee a pedestrian's safety, an AV would have to slow to a crawl any time a pedestrian is walking nearby on a footpath, in case the pedestrian decided to throw themselves in front of the vehicle," Noah Goodall, a scientist with the Virginia Transportation Research Council, wrote by email.

Human drivers can answer ethical questions big and small using intuition, but it's not that simple for artificial intelligence. AV programmers must either define explicit rules for each of these situations or rely on general driving rules and hope things work out.

"On one hand, the algorithms that control the car may have an explicit set of rules to make moral tradeoffs," Rahwan wrote. "On the other hand, the decision made by a car in the case of unavoidable harm may emerge from the interaction of various software components, none of which has explicit programming to handle moral tradeoffs."

Even if programmers choose to keep things vague, a pattern of behaviour will still emerge, discernible in individual incidents or in overall statistics.

"In the words of Harvey Cox, 'not to decide is to decide,'" Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, wrote in an email.

Continue reading:
http://www.msn.com/en-ca/money/technologyinvesting/self-driving-cars-are-already-deciding-who-to-kill/ar-BBxIoBB?li=AAggFp5
