Jun 24, 2016

NYT | Technology - June 24, 2016: When Machines Will Need Morals:

nytimes.com

Jim Kerstetter
 
You’re driving through an intersection and three people step into the road; the only way to avoid hitting them is to steer into a wall, possibly causing serious injury to yourself. Would you sacrifice yourself?

Now change the equation: There are four people in the road and you have two family members in the back seat. Do you still steer into the wall? Would it matter if one of the pedestrians was a baby? What if one of them was old, or very attractive, or very overweight?
It is, to say the least, an unpalatable choice. Now imagine programming a robotic vehicle to make the choice. What would you tell it to do?
For the engineers working on self-driving cars, this is more than just an intellectual puzzle. It is a decision they will eventually have to program into their machines.
A new article in the journal Science tried to determine what consumers would expect their autonomous vehicles to do. Not surprisingly, people told researchers they wanted machines that would try to save the greatest number of people.
But there was a caveat: Those surveyed also indicated that if they were actually in the self-driving vehicle, they’d appreciate it if the machine tried to save them first.
What’s the solution? A set of government requirements for autonomous car morality might be one way to go, though the people surveyed in the Science article said they were not keen on that. Manufacturers could also tailor morality to a buyer’s choice. Some might be selfless. Others? Not so much.
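How such a buyer-chosen setting might be encoded is pure speculation, but a minimal Python sketch can make the trade-off concrete. Everything here is hypothetical: the Maneuver fields, the occupant_weight knob standing in for a purchaser's morality dial, and the harm numbers are illustrative assumptions, not anything drawn from a real autonomous-driving system.

```python
# Hypothetical sketch only: score candidate maneuvers by expected harm,
# with occupant_weight standing in for a buyer-chosen morality setting.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_pedestrian_harm: float  # 0.0 (none) .. 1.0 (fatal), per pedestrian
    pedestrians_at_risk: int
    expected_occupant_harm: float    # 0.0 .. 1.0, per occupant
    occupants: int


def choose_maneuver(options, occupant_weight=1.0):
    """Return the maneuver with the lowest weighted expected harm.

    occupant_weight > 1.0 biases the car toward protecting its own
    passengers; 1.0 treats everyone equally.
    """
    def total_harm(m):
        pedestrian = m.expected_pedestrian_harm * m.pedestrians_at_risk
        occupant = m.expected_occupant_harm * m.occupants * occupant_weight
        return pedestrian + occupant

    return min(options, key=total_harm)


if __name__ == "__main__":
    options = [
        Maneuver("continue straight", 0.9, 3, 0.0, 2),
        Maneuver("swerve into wall", 0.0, 0, 0.7, 2),
    ]
    print(choose_maneuver(options, occupant_weight=1.0).name)  # swerve into wall
    print(choose_maneuver(options, occupant_weight=3.0).name)  # continue straight
```

With occupant_weight at 1.0 the sketch swerves into the wall to spare the three pedestrians; at 3.0 it protects its own passengers and continues straight. That single parameter is roughly the gap between what survey respondents said they wanted in general and what they wanted for the car they themselves were riding in.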
Can you teach a machine what the right thing to do is? First, humans would have to agree.