Monday, December 07, 2015

Will Your Driverless Car Kill You So That Others May Live?

A new op-ed by me in the Los Angeles Times (with the awesome illustration above, by Wes Bausmith, of car-as-consequentialist-philosopher).

I argue that programming the collision-avoidance software of an autonomous vehicle is an act of applied ethics, which we should bring into the open for the public to assess and for passengers to see and possibly modify within ethical limits.
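
To make "modify within ethical limits" concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, the numeric bounds, and the clamping rule are my assumptions, not a description of any real vehicle's software. The idea is that a passenger could tune how the car weighs others' safety against her own, but only within a regulator-approved range:

    # Hypothetical sketch: a passenger-adjustable weighting of others' harm
    # relative to the passengers' own, clamped to regulator-approved bounds.
    # All names and numbers here are illustrative assumptions.

    MIN_WEIGHT = 0.5  # the car may never discount others' safety below this
    MAX_WEIGHT = 2.0  # nor weight it more than twice the passengers' own

    def set_others_weight(requested: float) -> float:
        """Accept a passenger's requested setting, but keep it within
        the ethically permitted range."""
        return max(MIN_WEIGHT, min(MAX_WEIGHT, requested))

    print(set_others_weight(5.0))  # clamped down to 2.0
    print(set_others_weight(0.0))  # clamped up to 0.5

On this picture, the public (via the regulator), not the manufacturer or the passenger, would own the bounds.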

--------------------------------------

It's 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can't get traction. Your car does some calculations: If it continues braking, there's a 90% chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?
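
For concreteness, here is what "does some calculations" might amount to, as a bare-bones expected-harm comparison in Python. The probabilities and head counts come from the scenario above; the minimize-expected-deaths rule itself is just one possible (consequentialist) policy, offered as a sketch rather than a claim about how any actual car is programmed:

    # Illustrative only: choose the maneuver with the fewest expected deaths.
    # Whether a car SHOULD reason this way is exactly what is at issue.

    maneuvers = {
        # maneuver: (probability it kills, number of people at risk)
        "keep braking":    (0.90, 3),  # 90% chance of killing at least 3 children
        "steer off cliff": (1.00, 2),  # near-certain death for the 2 passengers
    }

    def expected_deaths(p_kill, people):
        return p_kill * people

    choice = min(maneuvers, key=lambda m: expected_deaths(*maneuvers[m]))
    print(choice)  # "steer off cliff": 2.0 expected deaths vs. 2.7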

This isn't an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?

Should your autonomous vehicle risk your safety, perhaps even your life, because a reckless motorcyclist chose to speed around a sharp curve?

The California Department of Motor Vehicles is now trying to draw up safety regulations for autonomous vehicles. These regulations might or might not specify when it is acceptable for collision-avoidance programs to expose passengers to risk to avoid harming others — for example, by crossing the double-yellow line or attempting an uncertain maneuver on ice.

Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.

Continued here.

9 comments:

  1. Collision-avoidance algorithms for individuals, communities, and the insurance industry...
    Finally, AI provides us with formulas for defining ethics in our legal system...

  2. Would we need to input driver info? What if the driver was a renowned pediatric heart or brain surgeon? In that case, if the car killed him to obey a protocol meant to save lives, it would be disobeying it. Or what if the driver was racing the cure for cancer or AIDS or Ebola to the lab, and the cure was a formula he had memorized? Killing him would mean letting millions die.

  3. Interesting. However, I see one major problem in the situations you mentioned: predicting how others (people not in the driverless car) will react. How can the car predict that none of the children will jump in front of it as it decides to head down the cliff? The same goes for the motorcyclist: which way will she try to evade the car? Sure, algorithms will be able to see where objects and people are moving and make quick decisions about what to do next. But scanning body language or eyes to predict where a person plans to move is a bit too much science fiction, even in my eyes.

  4. I believe fully program-controlled, autonomous vehicles cannot and should not be mixed with human-controlled vehicles of any kind, as human action and reaction cannot be calculated by a computer. If all vehicles were computer-controlled, however, an interchange of data would be possible, establishing interaction between each unit within a certain perimeter. It would be far faster and more efficient to determine the action to be taken in an emergency.

  5. How would Asimov's laws of robotics sway the ethics of your hypothetical case?

  6. Interesting point, Dirk.

    We have to sort of second-guess the other people on the road so as to manage some cooperation.

    A mix of robot- and human-driven cars means we have to second-guess robots as well, which is to say, second-guess manufacturers: people who aren't even on the road, i.e., people who suffer no ill consequence from any arrogance they might hold (whereas people in other cars might suffer a fender bender for being arrogant).

    I won't get into AI that actually thinks of things its manufacturer didn't think of.

  7. Hi all -- sorry for the slow turnaround on comments. Hectic week!

    Maul P.: That seems to approach absurdity at some point, which I assume is your idea.

    Anon Dec 8: Neither are human beings perfect at such predictions. Whether the AI or the human driver would be more accurate is an empirical question, and my guess (expressed briefly in the article) is that in some situations the AI will do better and in others the human will. We might even be able to predict which types of situation are which (e.g., humans may remain better at object recognition in novel, cluttered environments).

    Dirk: That might well be the future. But a mix seems a likely intermediate step.

    Howard: Asimov's laws are famous but don't bear serious scrutiny, which seems to come out in some of Asimov's later stories.

    Callan: That seems right to me.

  8. A car with human passengers but not a human driver? Whose crazy idea is that? I can understand a driver not having to do anything most of the time, but that's about it.

  9. Just a quick note in light of recent events: the current controversy over whether Apple should unlock a suspect's phone for the FBI seems to me directly analogous to the problem of the car killing you. A semi-intelligent system is doing things in the world that (a) have a serious impact and (b) cannot be blamed directly on the system's creators.

    I see this as supporting my two main thoughts on this issue. (1) You don't need real AI for this to happen. (2) The decision over what to do is actually a legal decision. While tech people and philosophers can have some input, this is basically a social question, and the avenue through which it should be resolved is the same as for other social questions: the law. Anything else is a power grab.
