
This article was published on June 2, 2018

Self-driving cars will kill people and we need to accept that

Recently, headlines have been speculating about what we should do about the risks of self-driving vehicles. After one of its self-driving vehicles struck and killed a pedestrian, Uber temporarily suspended all autonomous vehicle testing in the state of Arizona. In its wake, Arizona Governor Douglas Ducey reiterated that public safety is his top priority and described the Uber accident as an “unquestionable failure” to preserve it.

Tesla also recently confirmed that a fatal highway crash, which killed the driver of the vehicle, happened while its semi-autonomous Autopilot system was controlling the car. This is the second fatal accident in which Autopilot was at least partially at fault.

To many consumers, these incidents confirm something they suspected all along: trusting an AI system to handle driving is a mistake, and one that’s destined to kill people. Self-driving cars, they therefore conclude, need to be heavily regulated and scrutinized, and potentially delayed indefinitely, until we can be sure they’ll bring no harm to drivers and passengers.

This view is inherently flawed. It’s not a good thing that self-driving cars have killed people, but testing them in real-world situations is necessary if we want to keep moving toward a safer, brighter future. And unless we want to jeopardize that future, we need to get over our fears.

Self-driving cars are going to kill people. Period.

First, we need to recognize that no matter what safeguards we put in place or how cautious we are with rolling out self-driving technology, autonomous vehicles are going to be involved in fatal collisions.

There are 325 million people in the United States and more than 260 million registered vehicles. Cars and pedestrians constantly interact in a world full of random variables, from unexpected traffic patterns to extreme weather to objects suddenly falling into the road. With a fleet of vehicles traveling millions of miles, it’s inevitable that some combination of conditions will make an accident unavoidable, no matter how advanced the driving algorithm is.
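To see why “inevitable” is the right word, here’s a minimal back-of-the-envelope sketch in Python. The per-mile fatality rate and fleet mileage below are assumptions chosen for illustration, not measured figures:

```python
# Illustrative only: assumed per-mile fatality rate and annual fleet mileage.
FATALITIES_PER_MILE = 1 / 100_000_000   # ~1 fatal crash per 100M miles (near human levels)
FLEET_MILES_PER_YEAR = 1_000_000_000    # a large deployed fleet's annual mileage

# Probability the whole fleet gets through a year with zero fatalities
p_fatality_free_year = (1 - FATALITIES_PER_MILE) ** FLEET_MILES_PER_YEAR
print(f"Chance of a fatality-free year: {p_fatality_free_year:.1e}")
# -> roughly 4.5e-05, i.e. effectively zero
```

Even a fleet as safe as a careful human driver, at scale, is all but guaranteed to see fatal accidents every year.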

No matter what, people are going to die at the “hands” of an autonomous vehicle.

The risk of human drivers

Next, we need to acknowledge just how bad human drivers are at controlling their own vehicles, and how they compare to autonomous ones. In 2016, there were 40,200 vehicular fatalities in the United States alone. A Stanford review found that 90 percent of accidents are caused, at least in part, by human error, whether that’s overcorrecting, succumbing to a distraction, or drinking alcohol before getting behind the wheel. Some quick math tells you that’s 36,180 lives lost because a human behind the wheel made a mistake, with similar numbers year after year.
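To make that arithmetic explicit, here’s the calculation in a couple of lines of Python, using the figures cited above:

```python
# 2016 US vehicular fatalities and the Stanford human-error share cited above
TOTAL_FATALITIES_2016 = 40_200
HUMAN_ERROR_SHARE = 0.90

human_error_deaths = TOTAL_FATALITIES_2016 * HUMAN_ERROR_SHARE
print(f"Lives lost to human error: {human_error_deaths:,.0f}")  # -> 36,180
```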

Despite this, our standards for licensing human drivers are incredibly lax. Almost anyone can get a driver’s license, and the majority of the US population drives or rides in a car regularly, even though the lifetime odds of dying in a car accident are roughly 1 in 114. Autonomous vehicles may already be capable of transporting us more safely than comparable human drivers.
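That 1-in-114 figure is roughly what you get by spreading annual road deaths across the population over a lifetime. Here’s a quick sanity check; the method and the 79-year life expectancy are my assumptions for this sketch, not the source’s:

```python
# Rough reconstruction of lifetime odds of dying in a car accident
US_POPULATION = 325_000_000
ANNUAL_ROAD_DEATHS = 40_200
LIFE_EXPECTANCY_YEARS = 79   # assumption for this sketch

lifetime_risk = (ANNUAL_ROAD_DEATHS / US_POPULATION) * LIFE_EXPECTANCY_YEARS
print(f"Lifetime odds: about 1 in {1 / lifetime_risk:.0f}")
# -> about 1 in 102, the same ballpark as the cited 1 in 114
```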

Breaking eggs

Injuries and deaths are an unfortunate but expected part of any process that ultimately leads to greater safety. As the proverb goes, “you can’t make an omelet without breaking a few eggs.”

Consider another driving-related safety feature: airbags. The first airbags emerged in the 1960s, after an initial patent was filed in 1952. Designed to reduce the risk of fatality in a collision, these early airbags were vastly inferior to the ones we know today and were responsible for many injuries, and even some deaths. Even over the past 10 years, airbag deployments have caused almost 200 deaths.

However, it’s estimated that airbags have saved more than 44,800 lives. With every technological advancement, airbags save more lives and cause fewer injuries, and few people would argue that the airbag has been a “bad” or harmful innovation. The ideal scenario is one in which nobody is killed, but since that’s extremely unlikely, I’ll assume most people agree that saving 44,800 lives at the cost of roughly 200 is a trade worth making.

Early iterations of the autonomous vehicle may result in some loss of life, but even our most underdeveloped models will most likely be an improvement over the average human driver.

Accepting the truth

Despite the numbers, many consumers and policymakers still want to push for higher standards for self-driving vehicles. The head of the National Highway Traffic Safety Administration (NHTSA), for example, believes autonomous vehicles need to be twice as safe as human drivers before they’re allowed to roam the streets. But the opportunity cost of waiting that long is years without any improvement at all: we’ll keep losing 40,000 lives a year instead of 35,000, or 30,000, or 25,000.

A RAND Corporation study backs up this reasoning, investigating the real opportunity cost of waiting to deploy autonomous vehicles until they’re 75 to 90 percent better than human drivers. In nearly all test conditions, more permissive, flexible policies save more lives than their conservative counterparts, and in the long term could save more than half a million.
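A stripped-down version of that trade-off can be sketched in a few lines of Python. To be clear, this is a toy model, not RAND’s: the baseline death count, maturation period, and deployment years below are all assumptions. But it shows why deploying early can come out ahead once you account for the on-road learning that deployment enables:

```python
# Toy opportunity-cost model (illustrative; these parameters are assumptions,
# not RAND's actual inputs). AVs improve with real-world miles, so deploying
# early accelerates the path to the mature 90% fatality reduction.
BASELINE_DEATHS = 40_000   # annual US road deaths with human drivers
HORIZON = 30               # years simulated
MATURE_GAIN = 0.90         # fatality reduction once the technology matures
YEARS_TO_MATURE = 15       # years of on-road learning needed to get there

def total_deaths(deploy_year: int, initial_gain: float) -> float:
    """Cumulative deaths if AVs deploy in `deploy_year` with an initial
    fatality reduction of `initial_gain`, then improve toward MATURE_GAIN."""
    deaths = deploy_year * BASELINE_DEATHS  # human-only years before deployment
    for year in range(deploy_year, HORIZON):
        progress = min(1.0, (year - deploy_year) / YEARS_TO_MATURE)
        gain = initial_gain + (MATURE_GAIN - initial_gain) * progress
        deaths += BASELINE_DEATHS * (1 - gain)
    return deaths

permissive = total_deaths(deploy_year=0, initial_gain=0.10)    # deploy once slightly safer
conservative = total_deaths(deploy_year=15, initial_gain=0.90) # wait for near-perfection
print(f"Deaths avoided by deploying early: {conservative - permissive:,.0f}")
# -> 284,000 under these assumed parameters
```

The exact number is an artifact of the made-up parameters; RAND’s richer model, with its own inputs, is where the half-million figure comes from. The direction of the result is the point.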

The real trouble is: how can you convince a lawmaker, or even yourself, that this is the right choice to make? On paper, saving 500,000 lives over the long run is a no-brainer, but when an innocent person is struck and killed by a robotic car, our instincts kick in and reject the idea.

The best we can do is accept that self-driving cars aren’t perfect, and acknowledge that they don’t have to be. Doctors don’t save every patient, yet medicine is still a net positive. Traffic lights can’t prevent every collision, but they’re still worth the investment.

Just because AI is involved doesn’t make this any different. The end goal is to save lives, even if we lose some along the way. It’s not a pleasant idea, but it’s one that should appeal to our moral imperative: to protect human life, however we can.
