To Save the Most Lives, Deploy (Imperfect) Self-Driving Cars ASAP

A new report argues robots will kill fewer people than human drivers, even if the technology isn't foolproof yet.

Cars crash a lot: Nearly 37,500 Americans died on the roads last year. Autonomous cars would crash less (for one thing, they don’t drink or text or yell at their kids in the backseat). But that doesn’t mean drivers are ready to hand over the wheel.

“There will be a horrific crash, not long after the vehicles are introduced, because automobiles crash a lot,” says David Groves, a senior policy researcher at the RAND Corporation, a policy think tank. “We are so numb and tolerant of the crashes that occur by the thousands all around us every year,” he says. “But the first autonomous vehicle crash is going to be extremely novel." In other words: Expect a freak-out.

What then? Does a public backlash send potentially innovative tech spinning into disrepute or even obscurity, like it did with Three Mile Island and nuclear energy, or the Hindenburg disaster and airships? Those are the questions at the heart of new research published today by Groves and co-investigator Nidhi Kalra, a roboticist who heads up RAND’s Center for Decision Making Under Uncertainty.

The report addresses the doubts percolating around self-driving cars, but it's very clear that these things are coming. Just look at San Francisco; Tempe, Arizona; Michigan; Boston; Pittsburgh, Pennsylvania; or the secretive former Air Force base in California where Waymo conducts testing. But wide-scale deployment of autonomous vehicles hasn't actually happened yet, and regulators have a hard time knowing when totally self-driving cars will be ready to mix with human traffic.

The rational argument: Put them on the roads when they cause fewer deaths overall than human drivers. If humans cause 37,462 car deaths a year, and driverless cars cause 37,461, let ‘em roll. Counter-argument: The public will flip the first time a single person dies in a self-driving car accident, even if thousands of others have been “saved” by non-distracted, non-drunk robo-cars. (Witness the frenzy produced by the death of a driver behind the wheel of a semiautonomous Tesla.) The engineers may not mind a less-than-perfect robot. The public will likely prove less forgiving.

Presumably, though, there will be some moment where it makes sense, public safety-wise, to let autonomous vehicles take to the roads. But when is that? The RAND researchers used an analytic method called robust decision making to try to put some intellectual rigor into the question.

Their conclusion sounds clichéd: Don’t let the perfect be the enemy of the good. But it’s meaningful, too. They conclude that tens or even hundreds of thousands of lives could be saved by self-driving cars, even if regulators allow less-than-perfect cars on the road. As Groves puts it, “Even though we can’t predict the future, we found it’s really hard to imagine a future where waiting for perfection doesn’t lead to really big opportunity costs in terms of fatalities.”

Hard-ish Numbers

Self-driving cars are obviously not perfect yet. In fact, we have a pretty clear sense of how not perfect they are. The 43 companies testing self-driving cars in California must submit public “disengagement reports,” noting every time a human driver intervenes while behind the wheel of a self-driving car. Last year’s reports show these cars are getting better, but aren’t all the way there: Waymo's cars averaged 5,128 miles between disengagements—pretty good!—while Mercedes-Benz managed just 1.8 miles—not so great. Today, autonomous vehicles are about as good as a standard crappy driver. "You’re probably safer in a self-driving car than with a 16-year-old, or a 90-year-old," researcher Brandon Schoettle told WIRED in August. "But you’re probably significantly safer with an alert, experienced, middle-aged driver than in a self-driving car."

The researchers studied three basic scenarios. In one, autonomous vehicles get on the road when they’re just a bit better than the average human driver, about 10 percent safer. In another, AVs hit the streets later, when they’re 75 percent safer. In the third, AVs arrive when they’re nearly perfect, 90 percent better than human drivers. They found that, generally, the 10 percent safer scenario will save more human lives, faster—as many as 3,000 a year.

Add in driverless cars' ability to improve as a fleet—when one makes a particular screw-up, all its brethren can learn to avoid it—and you get a robust argument for putting these vehicles on the road sooner rather than later. The researchers even built an online tool, so you can play with the scenarios yourself.
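The core of that argument is cumulative arithmetic: a fleet that is only modestly safer, deployed now, prevents more deaths over the years than a near-perfect fleet that arrives a decade or more later. Here is a minimal back-of-the-envelope sketch in Python of that comparison, not RAND's actual model; the baseline death toll is the figure cited above, but the adoption rate, fleet-learning rate, introduction years, and time horizon are illustrative assumptions you can adjust.

    # Toy comparison of "deploy early, imperfect" vs. "wait for near-perfect."
    # All parameters below are illustrative assumptions, not RAND's model or data.

    BASELINE_DEATHS = 37_462   # annual US road deaths with human drivers (figure cited above)
    YEARS = 30                 # assumed horizon for the comparison

    def cumulative_deaths(intro_year, initial_safety_gain,
                          adoption_rate=0.05, improvement_per_year=0.02):
        """Rough cumulative road deaths if AVs arrive in `intro_year` with a given
        safety gain (0.10 = 10 percent fewer deaths per mile than human drivers),
        then spread through the fleet and keep improving as they learn."""
        total = 0.0
        av_share = 0.0
        safety_gain = initial_safety_gain
        for year in range(YEARS):
            if year >= intro_year:
                av_share = min(1.0, av_share + adoption_rate)                # assumed fleet turnover
                safety_gain = min(0.90, safety_gain + improvement_per_year)  # assumed fleet-wide learning
            deaths = BASELINE_DEATHS * ((1 - av_share) + av_share * (1 - safety_gain))
            total += deaths
        return total

    # Scenario A: deploy immediately, only 10 percent safer than the average human.
    early = cumulative_deaths(intro_year=0, initial_safety_gain=0.10)
    # Scenario B: wait 15 years for nearly perfect, 90-percent-safer cars.
    late = cumulative_deaths(intro_year=15, initial_safety_gain=0.90)

    print(f"Early, imperfect deployment: {early:,.0f} deaths over {YEARS} years")
    print(f"Waiting for near-perfection: {late:,.0f} deaths over {YEARS} years")
    print(f"Lives saved by not waiting:  {late - early:,.0f}")

Under these toy assumptions, the early-but-imperfect scenario comes out far ahead in cumulative lives saved; the RAND tool runs the same kind of comparison with far more careful inputs.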


But we don’t live in a purely utilitarian world, and human beings are really not into certain risks. “Society tolerates a significant amount of human error on our roads,” Gil Pratt, who heads up Toyota’s research institute, said earlier this year. “We are, after all, only human. On the other hand, we expect machines to perform much better.”

Numbers-based research could modulate the panic that will ensue when an autonomous vehicle inevitably kills a person. (You wouldn’t care much that robo-cars saved thousands of lives if the one kid they hit was yours.) “One of our hopes is that the very existence of this evidence would temper a backlash based on overextended emotion,” says Kalra, Groves' co-investigator.

After all, even though the US Department of Transportation says 94 percent of fatal crashes are due to human error, the public keeps telling pollsters they’re wary of driverless cars. Fifty-six percent of Americans surveyed told the Pew Research Center that they would not want to take even one ride in a driverless vehicle. So the federal government keeps updating its rough guidelines on self-driving cars, and Congress is hammering out a bill to create a framework for regulations, with bipartisan support. Which means the policies created today—and a good public understanding of what the risks really look like, numerically—will determine the safety of the roads in the future.