Tech companies and auto manufacturers are, in some cases independently and in some cases together, racing to see who can get the first autonomous vehicle to market. By “autonomous vehicle,” I’m referring to fully automated vehicles, which require no input from the user; there are already semi-autonomous vehicles on the market from Tesla, BMW, Mercedes-Benz, and others.
Google has famously been striving to complete a fully autonomous vehicle for years, bypassing the semi-autonomous route in favor of full autonomy, which it feels is safer; that project is now called Waymo, and collectively, those cars have driven more than 2 million miles. Intel recently acquired tech company Mobileye in an effort to join the race, and car service Uber is already launching a fleet of autonomous vehicles with human safety drivers behind the wheel.
All this competition seems like a good thing for the autonomous car’s future; it means more competitive prices, faster tech development, and ultimately, more consumer choices. However, this race to be the first to market may have dire consequences for the self-driving car’s future.
The Threat of Prototyping
Autonomous cars already have all the easy stuff worked out. Any tech company that’s spent at least a few years on research and development could hypothetically handle almost any “normal” condition with ease, and with greater safety and precision than a human driver—especially given that 94 percent of traffic collisions are attributable to human error. But it isn’t the common circumstances that are a concern; it’s the extremely rare, unpredictable, incalculable events that autonomous cars struggle with, and because of their rarity, they rarely appear in experimental conditions.
In a bid to get the first car on the road, tech companies may skip investigating how best to handle these rare events, instead focusing on the 99 percent of environments they can feasibly handle. This is a flawed model, and it will almost certainly lead to an accident that’s the fault of an autonomous car’s programming.
Singapore, which has been famously friendly to autonomous car companies, invited car service NuTonomy to operate within its limits, and within a month of launch, one of its taxis was involved in a minor collision.
This doesn’t seem like a big deal, but even if autonomous cars demonstrably cause fewer incidents than human drivers, we’re going to see the following three reactions:
- Stricter regulation. Currently, autonomous cars occupy a legal gray area, and policymakers aren’t sure how to handle them. Some states and regions have already demanded that self-driving cars be operated with a licensed driver behind the wheel, ready to take over at a moment’s notice. If an autonomous car-related incident arises, you can bet lawmakers will make it even tougher to get an autonomous vehicle on the road for consumers.
- Consumer distrust. While tech enthusiasts may be eager for autonomous cars to become more readily available, there’s no shortage of people who fear the reality of self-driving cars, either because they have legitimate safety concerns or because they’ve watched too many AI-takeover movies. In any case, one significant public incident would be enough to deter them from ever purchasing a self-driving car for their own use. And of course, without sufficient demand, the autonomous car industry would cease to exist.
- Corporate disinterest. Self-driving cars aren’t cheap. It’s taken the effort of thousands of people in dozens of companies, spending billions of dollars to invest in new research and development just to get to this point. If the hurdles of laws and consumer disinterest start to interfere with that investment, some companies may simply decide to cut their losses. Companies that depend on vehicles to operate, like Uber, may hedge their bets by bulking up their fleet of human drivers. Alternatively, they may choose to invest in a different type of driving technology, such as semi-autonomous systems, but those present their own unique challenges.
Is Semi-Autonomy the Way to Go?
With these potential reactions to a failed fully autonomous vehicle in mind, you might think a better approach would be to master semi-autonomous driving, with autonomous features that a human driver can easily override in complex environments and circumstances. However, this is problematic; human drivers are meant to be the safety mechanism for these vehicles, and we’re already arrogant and easily distracted. The simple presence of a smartphone is enough for some people to stop paying attention to the road entirely. In a semi-autonomous vehicle, which could remain in control for the majority of the ride, we’d be lulled into a false sense of security, and may be even more prone to unsafe driving behavior. Simply being in the driver’s seat wouldn’t be enough; you’d have to remain alert at all times, even though you aren’t in direct control of the vehicle.
This could create a devastating feedback loop for autonomous technology: as more restrictions are imposed on full autonomy, drivers would retain more control over autonomous car features, which in turn could cause more accidents and prompt still stricter restrictions.
There’s no easy answer for the autonomous car race. Hopefully, the tech companies and automakers furthest along in development will be the first to roll out their products to consumers, and they’ll be ridiculously safe and free from incident in the months following their launch. That’s the optimistic viewpoint. However, even under a worst-case scenario, autonomous vehicles probably won’t be completely dismissed; they’ll just be delayed, until we can figure out a way to resolve all the problems associated with them.
This post is part of our contributor series. It is written and published independently of TNW.