Who Lives and Who Dies When Artificial Intelligence Drives Your Car?

February 24, 2020 Topic: Technology Blog Brand: The Buzz Tags: AI, Artificial Intelligence, Cars, Automobiles, Technology, Economy

We can expect AI to perform as well as we do in difficult situations, and hopefully much better. But we can't expect it to play at amateur philosophy. We should be afraid if it did. Would you trust a car that thought deeply about issues of life and death? Or would you rather your car just got on with the driving, and left the big moral questions to the philosophers?

It's winter 2030, and your fully autonomous full-size SUV is cruising down a snowy country road. Unlike today's Autopilot-equipped Teslas, your car doesn't have a gas pedal, a brake pedal, a steering wheel, or even a front windshield. The car is truly, fully autonomous, reflecting what the industry calls Level 5 automation. For safety, you're in a rear-facing seat, and to pass the time, you're watching Avengers: Endgame in classic 2D on your car's rear holoscreen. Sit back, relax, and leave the driving to AI.

Your car's onboard artificial intelligence (AI) computer has no problem navigating the arrow-straight road. Its LIDAR sensors detect a thick picket of closely planted live oaks on both sides, creating a tunnel-like passage, and its 5G car-to-car communication system has already established contact with the two-seat Smart microcar that is rapidly approaching in the opposite lane.

Suddenly, three kids on a sled slide through the trees and into the road, their six-foot Flexible Flyer bumping to a stop astride your lane. Your car has a decision to make, and there's no time for it to ask your opinion. Does it swerve left, plowing through the Smart car but giving both you and (maybe) the now not-so-smart driver a plausible chance at survival? Does it swerve right, hitting an oak tree head on and splitting your car (and you with it) in two? Or does it run over the kids?

In one form or another, this is the philosophical dilemma that obsesses AI ethicists. MIT has an entire website, the Moral Machine, inviting you to play the car and make these kinds of decisions for yourself. Should the car kill pedestrians or passengers? Kids or babies? Productive, employed adults or unproductive, unemployed babies? One passenger or three pedestrians? Five people or five cats (yes, really)? You get the idea. It's all kill, kill, kill. If you're anything like a normal person, you'll want to stay away from sophomore philosophy classes.

These scenarios are the autonomous driving version of that sophomore philosophy staple, the trolley problem. The standard trolley problem gives you the opportunity to pull a lever that will shunt a trolley car from a track where five people are tied up onto a siding where only one person is tied up. Why are so many people tied up on trolley tracks? Don't ask. They just are. So are you willing to murder one person in order to save five? What if that one person is a baby? Or your mother? Or a cat?

The autonomous vehicle equivalent at least has the virtue that it really could happen. But turning it into a philosophy problem misrepresents the very nature of AI. The whole point of AI is algorithmic training through machine learning (ML). Driving algorithms learn how to drive by observing hundreds of thousands of hours of actual human driving, just as Google's search AI learns how to rank results by observing billions of choices made by human web users. Throw in some basic rules (drive in the center of the lane; obey speed limits; don't tailgate; don't fall asleep) and the driving algorithm has the potential to consistently outperform even the best human drivers.
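
To make that concrete, here is a minimal, purely illustrative sketch of how such a system could be put together: a policy learned by imitation from human driving, wrapped in a few hard-coded rules. Every name, field, and threshold below is a hypothetical assumption for the sake of the example, not any manufacturer's actual software.

```python
# Illustrative sketch only: a learned driving policy wrapped in simple hard rules.
# All names, fields, and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class Observation:
    lane_offset_m: float     # distance from lane center (negative = left)
    speed_mps: float         # current speed
    speed_limit_mps: float   # posted limit
    gap_to_lead_s: float     # time gap to the vehicle ahead
    obstacle_ahead: bool     # is anything blocking the lane?


@dataclass
class Control:
    steering: float          # -1.0 (hard left) .. 1.0 (hard right)
    throttle: float          # 0.0 .. 1.0
    brake: float             # 0.0 .. 1.0


class LearnedPolicy:
    """Stand-in for a model trained by imitating hours of human driving."""

    def predict(self, obs: Observation) -> Control:
        # A real system would run a trained neural network here; this stub just
        # nudges the car back toward the lane center and holds a light throttle.
        return Control(steering=-0.1 * obs.lane_offset_m, throttle=0.3, brake=0.0)


def drive(obs: Observation, policy: LearnedPolicy) -> Control:
    """Learned behavior first, with the basic rules layered on top."""
    ctrl = policy.predict(obs)
    if obs.speed_mps > obs.speed_limit_mps:   # rule: obey the speed limit
        ctrl = Control(ctrl.steering, throttle=0.0, brake=0.2)
    if obs.gap_to_lead_s < 2.0:               # rule: don't tailgate
        ctrl = Control(ctrl.steering, throttle=0.0, brake=0.3)
    return ctrl
```

The learned part supplies the everyday driving skill; the rules are a thin layer of non-negotiable constraints sitting on top of it.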

So the AI solution to the kids on the sled will be the same as our own split-second reaction, only faster: slam on the brakes. Swerving into trees or oncoming traffic is a non-starter: no one is going to buy a car with a built-in suicide mode. Autonomous vehicles are not going to make ethical decisions. Neither would you, under the circumstances. Like you, your AI-powered SUV is just going to do its best to stop as soon as possible, and if that's unfortunate for the kids on the sled, well ... that's just too bad. They should have listened to their parents and kept off the road.
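
Written down, that reflex is just one more hard rule sitting above the learned policy, not a moral calculus. Continuing the hypothetical sketch above (same assumed Observation and Control types), it might look something like this:

```python
from typing import Optional


def emergency_check(obs: Observation) -> Optional[Control]:
    """Hypothetical emergency rule: if something blocks the lane, brake as hard
    as possible and hold the lane. Swerving into oncoming traffic or the trees
    is simply not in the rulebook."""
    if obs.obstacle_ahead:
        return Control(steering=0.0, throttle=0.0, brake=1.0)
    return None  # nothing in the way: defer to the normal drive() logic
```

There is no branch in which the car weighs whose life is worth more, because no one would write one.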

We can expect AI to perform as well as we do in difficult situations, and hopefully much better. But we can't expect it to play at amateur philosophy. We should be afraid if it did. Would you trust a car that thought deeply about issues of life and death? Or would you rather your car just got on with the driving, and left the big moral questions to the philosophers? Safely ensconced in the self-driving car of the future, go ahead and take a spin on MIT's Moral Machine. Just make sure you leave the driving to AI.

Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.