A Programmer’s Dilemma
Your life may depend on the answer.
Suppose you are a skilled artificial intelligence programmer working on the decision-making algorithm for a self-driving car. Most of the decisions are straightforward, assuming the car has sufficient information. Stop for red lights. Stop rather than run over people, animals, or things. Accelerate to a safe (and legal?) speed at a rate which takes into account how well the tires are gripping the road. Turn the wheel in the direction of a skid. Pump the brakes when necessary.
Do you brake for deer? This one’s a little tougher. It depends on road conditions and assumptions about the ability of any vehicle behind you to react to your braking. But the principle is clear: you do what’s best for the occupants of the car. You don’t hit a moose, even if you have to brake suddenly to avoid it, because the moose’s barrel body will come through the windshield and kill someone.
Now the tough one. The car is on a narrow mountain road with a 3,000-foot drop-off to the left and a solid cliff wall on the right. It comes around a turn and finds four children unaccountably in the road. There is not enough space to stop or even slow down substantially. The car knows that. Going straight will kill the children. If the car turns into the cliff wall, it will careen off and still hit the children. The only way to save the children is to plunge off the road, which will almost surely kill the solo occupant (and owner of the car). The car can’t just give control back to the owner; there’s obviously not enough time.
Is the first rule of robotic cars to protect occupants? Or is the first rule to protect human life in general, so it’s got to go with the fewest fatalities? Does the owner get to set preferences for decisions like this one? That’s not completely unreasonable, since human drivers get to make their own decisions. How would you like to have to choose from these alternatives when you first set up your car?
- always save the lives of those outside the car rather than protecting occupants.
- always save the lives of occupants rather than protecting those outside the car.
- always save the greatest number of human lives.
- protect certain listed occupants (perhaps your children) at all costs.
- protect the lives of those least at fault in setting up the situation.
Etc. And what are the liability consequences of setting these preferences?
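To make the choice concrete, here is a minimal sketch, in Python, of what an owner-selected policy might look like. Everything in it is invented for illustration — the OccupantPolicy names, the Outcome fields, the choose_maneuver function — and it covers only the first three alternatives, since the other two need information this toy model doesn’t have. No real vehicle exposes an interface like this, and a real decision routine would be far more complicated than picking a minimum.

```python
# Hypothetical sketch only. OccupantPolicy, Outcome, and choose_maneuver are
# invented for illustration; no production vehicle exposes such an interface.
from dataclasses import dataclass
from enum import Enum, auto


class OccupantPolicy(Enum):
    """One owner-selected preference per alternative listed above."""
    PROTECT_OTHERS_FIRST = auto()
    PROTECT_OCCUPANTS_FIRST = auto()
    MINIMIZE_TOTAL_FATALITIES = auto()


@dataclass
class Outcome:
    """Predicted result of one possible maneuver."""
    maneuver: str
    occupant_fatalities: int
    outside_fatalities: int


def choose_maneuver(outcomes: list[Outcome], policy: OccupantPolicy) -> Outcome:
    """Return the outcome the selected policy prefers (grossly simplified)."""
    if policy is OccupantPolicy.PROTECT_OCCUPANTS_FIRST:
        key = lambda o: (o.occupant_fatalities, o.outside_fatalities)
    elif policy is OccupantPolicy.PROTECT_OTHERS_FIRST:
        key = lambda o: (o.outside_fatalities, o.occupant_fatalities)
    else:  # MINIMIZE_TOTAL_FATALITIES
        key = lambda o: o.occupant_fatalities + o.outside_fatalities
    return min(outcomes, key=key)


# The mountain-road scenario above, reduced to the two real choices:
outcomes = [
    Outcome("go straight", occupant_fatalities=0, outside_fatalities=4),
    Outcome("leave the road", occupant_fatalities=1, outside_fatalities=0),
]
print(choose_maneuver(outcomes, OccupantPolicy.MINIMIZE_TOTAL_FATALITIES).maneuver)
# -> leave the road
print(choose_maneuver(outcomes, OccupantPolicy.PROTECT_OCCUPANTS_FIRST).maneuver)
# -> go straight
```

The unsettling part is not the code; it’s that someone has to decide which policy is the default and whether the owner may change it.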
Should an ethical programmer insist that a car sold with his or her code in it have mouse print that spells out whether or not the car thinks it has to protect its driver at all costs? With a lot of work, code could be written so you could interview your car by giving it scenarios and asking it what it would do in each circumstance.
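The interview is easy to imagine in code, even if building a trustworthy version of it would not be. A hypothetical sketch, with a crude decide() routine standing in for the car’s real (and vastly more complex) decision logic — none of these names reflect any actual vehicle API:

```python
# Hypothetical sketch: "interviewing" a car by running canned scenarios through
# its decision routine. decide() is a crude stand-in, not anyone's real API.

def decide(scenario: dict) -> str:
    """Stand-in for the car's decision logic; returns the chosen maneuver."""
    if scenario["can_stop_in_time"]:
        return "brake to a stop"
    # Placeholder rule: choose whichever option predicts the fewest total deaths.
    options = scenario["options"]  # {maneuver: predicted total fatalities}
    return min(options, key=options.get)


def interview(car_decide, scenarios: list[dict]) -> None:
    """Print what the car says it would do in each hypothetical circumstance."""
    for s in scenarios:
        print(f"{s['name']}: {car_decide(s)}")


interview(decide, [
    {"name": "deer on a wet road", "can_stop_in_time": True, "options": {}},
    {"name": "children on a cliff road", "can_stop_in_time": False,
     "options": {"go straight": 4, "leave the road": 1}},
])
# deer on a wet road: brake to a stop
# children on a cliff road: leave the road
```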
I have no idea how these decisions are being made today. I am sure that there are programmers who are dealing with them. I do not think the answer is to ban self-driving cars; I believe they will soon save many lives overall by being better drivers than humans – even though they will kill some people.