These Programmers are Trying to Teach Driverless Cars to do What's Right

What policies should govern a self-driving car when it’s faced with an imminent crash — and should it prioritize the lives of the passengers sitting inside, or the many other people outside who may be affected by it?

It’s a complex question, one that people tend to answer differently depending on the circumstance. But some engineers are trying to approach it by showcasing several ways a driverless car could handle an object in the road.

In a new video, Stanford University researchers show that by tweaking their driverless car's algorithm, they can get it to respond to an obstacle using three distinct tactics: stopping, passing narrowly around the object, or passing widely by cutting into the lane reserved for oncoming traffic. Each of these choices comes with its own set of trade-offs.
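
To make that trade-off concrete, here is a minimal sketch of how such a choice might be framed as picking the lowest-cost maneuver. Everything in it is an illustrative assumption, not the Stanford team's actual algorithm: the `Scenario` fields, the `choose_maneuver` function, and the weight values are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    obstacle_width_m: float   # how far the obstacle intrudes into our lane
    lane_width_m: float       # usable width of our lane
    oncoming_gap_s: float     # seconds until the next oncoming vehicle arrives
    speed_mps: float          # our current speed

# The weights are where the value judgments live: how heavily each risk is
# penalized. These numbers are illustrative, not the researchers' tuning.
WEIGHTS = {
    "hard_stop": 1.0,    # discomfort and rear-end risk of braking hard
    "near_miss": 5.0,    # risk of clipping the obstacle on a narrow pass
    "oncoming": 20.0,    # risk of crossing into the oncoming lane
}

def choose_maneuver(s: Scenario) -> str:
    """Return the lowest-cost of the three tactics shown in the video."""
    clearance = max(s.lane_width_m - s.obstacle_width_m, 0.1)
    costs = {
        "stop": WEIGHTS["hard_stop"] * s.speed_mps,        # worse at speed
        "pass_narrow": WEIGHTS["near_miss"] / clearance,   # tighter is riskier
        "pass_wide": WEIGHTS["oncoming"] / max(s.oncoming_gap_s, 0.1),
    }
    return min(costs, key=costs.get)

# Plenty of clearance and no oncoming traffic for a while: a narrow pass wins.
print(choose_maneuver(Scenario(obstacle_width_m=1.0, lane_width_m=3.5,
                               oncoming_gap_s=8.0, speed_mps=15.0)))
# -> pass_narrow
```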

This is interesting on a couple of levels. First, what the video shows is the act of translating a decision about what ought to happen — a value judgment — into actual programming. Now, at its core, all driving is just a series of value judgments. Is it safe to turn? Can I make that light? But it’s so easy to think of driverless software packages as black boxes that we sometimes forget it’s real people who are making these programming decisions. So this video shows, rather than tells, exactly what it looks like to program ethics into a car.
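
Continuing the hypothetical sketch above, a value judgment shows up in the code as nothing more than a tunable number. Change one penalty weight and the same car, in the identical scene, does something different:

```python
# The obstacle now blocks most of the lane, so the narrow pass is dicey.
s = Scenario(obstacle_width_m=3.0, lane_width_m=3.2,
             oncoming_gap_s=4.0, speed_mps=18.0)

print(choose_maneuver(s))   # -> pass_wide: the cheapest option as weighted above

# One value judgment changed: crossing into oncoming traffic is now treated
# as ten times worse. The identical scene resolves to a hard stop instead.
WEIGHTS["oncoming"] = 200.0
print(choose_maneuver(s))   # -> stop
```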

That brings us to the second point of interest. The video reveals how each programming decision bears a tremendous amount of responsibility in determining the fate of the car’s occupants. What’s more, the range of choices that are available to the programmers even in the hypothetical scenario above — stop, pass closely, pass widely — raises questions about whether the researchers can ever fairly make those decisions on behalf of the user.

We’ll have more to say about the ethics of driverless cars in a future post, but for now, we’d like to know: What approach would you choose in the obstacle scenario, knowing that you might be making a choice about someone else’s life or death? Is there another option the researchers didn’t address in the video that you’d want to implement? Let us know in the comments.


Washington Post