When most people think about self-driving vehicles, they picture themselves sitting in the passenger seat, legs up and arms folded behind their head, as the car drives itself from Point A to Point B – whether cross-country or just down the street.
Or maybe they’re not in the car at all, and have simply sent it to pick up their children from school. After all, autonomous vehicles are meant to ease travel, aren’t they?
Today, many experts agree that combined function automated cars will hit the market within the next four years.
Like Elon Musk, who is confident that Tesla will have developed fully automated self-driving cars in just two years, I believe that the technology to power this type of autonomous vehicle isn’t as far off as most think. Are we on the brink of accomplishing our science fiction fantasies?
I think it’s best, before delving in further, to define the five levels of autonomous driving:
Level 0: These are the everyday cars you drive to the office each morning. That is, if you bought a car before 2011. The driver is in complete control at all times.
Level 1: This is function-specific automation. The driver is still in complete control but can relinquish limited authority to cruise control, automatic braking, or lane keeping. However, the driver can only relinquish one function at a time: if the driver’s hands are off the wheel, a foot must stay on the pedal, and vice versa. Most cars on the road today fall into this level.
Level 2: This is combined function automation. The driver can cede two or more automated primary controls at the same time, meaning hands can be off the wheel and feet off the pedals simultaneously. Still, the driver is expected to remain vigilant in case of emergencies, because the car will not give advance warning when driver authority is needed. This is where most self-driving cars are today, like the Tesla Autopilot.
Level 3: This is limited self-driving automation. The driver is expected to be somewhat vigilant but ultimately cedes full safety control to the car. In certain traffic- or environment-specific situations, the car will signal the driver to take over. This is the Volvo Intellisafe Autopilot.
Level 4: This is full self-driving automation, and what Elon Musk believes he can accomplish within the next two years. The driver is not expected to be vigilant at any time, and the level covers both occupied and unoccupied vehicles. In addition to Tesla, this is (nearly) the Google Car.
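The levels above can be sketched as a simple enum. The names and the vigilance rule below are my own illustrative encoding of the definitions, not any official API:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The five automation levels described above."""
    NO_AUTOMATION = 0         # driver controls everything, all the time
    FUNCTION_SPECIFIC = 1     # one automated function at a time
    COMBINED_FUNCTION = 2     # hands and feet may both be off at once
    LIMITED_SELF_DRIVING = 3  # car may signal the driver to take over
    FULL_SELF_DRIVING = 4     # no driver vigilance expected

def driver_must_stay_vigilant(level: AutomationLevel) -> bool:
    # At levels 0-2 the driver must monitor constantly; level 3 still
    # expects some vigilance; only level 4 expects none at all.
    return level <= AutomationLevel.LIMITED_SELF_DRIVING

print(driver_must_stay_vigilant(AutomationLevel.COMBINED_FUNCTION))  # True
print(driver_must_stay_vigilant(AutomationLevel.FULL_SELF_DRIVING))  # False
```

The key boundary is between levels 3 and 4: only at level 4 does responsibility shift entirely away from the human.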
Legal and insurance issues are roadblocks
In contrast to most observers, who argue that technological roadblocks are delaying the development and launch of autonomous vehicles, I attribute the delays to human factors.
First, there are barriers to entry, like legal hurdles and insurance regulations, that technology companies must overcome. In December, for example, California’s Department of Motor Vehicles released draft regulations that not only require drivers to actually sit behind the wheel of self-driving vehicles, but also demand that the vehicles be equipped with steering wheels and pedals. This, of course, is a direct challenge to the prototype designed by Google.
Unfortunately, California, which is influential because of the size of its market and other factors, is setting an unaccommodating, contentious precedent for other government bodies wrestling with this innovation. This is sure to slow down technological development.
In addition to such legal obstacles, there are several insurance issues preventing innovators and investors from diving in head first.
For example, if an autonomous car must navigate a messy, busy roundabout in Paris, should it bend the rules and drive on the shoulder, or remain on the road as a courtesy to other drivers? Or, if a road is blocked and the only way around it is to drive the wrong way down a one-way street, causing an accident, who is responsible and who pays the damages: the owner or the car manufacturer?
Ultimately, once these legal and insurance issues are cleared up, autonomous vehicles should nearly be ready to take the streets.
Tech is nearing its final destination
In fact, most of the technology is already here; it just needs further development. Deep learning algorithms and supercomputers, the two components necessary to power self-driving vehicles, are both well beyond their early stages.
In addition, vision recognition, vision mapping, and other well-known sensing technologies – also critical elements in this industry – are highly mature and already used in most cars today.
First, there’s machine learning: computer algorithms that learn from data to find patterns, make predictions, and choose among options. Within it is a subfield dubbed deep learning: algorithms that recognize speech and images, which is exactly what’s needed to power self-driving vehicles. Deep learning is the brain of the car.
Deep learning enables the vehicle first to recognize other cars, pedestrians, environmental factors, traffic, and street signs, and second to generate and implement a plan based on this information. For example, should the vehicle switch lanes, turn left, speed up, slow down, or – if a pedestrian is crossing the street where he shouldn’t – come to a complete halt?
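That recognize-then-plan split can be illustrated with a minimal sketch. Everything below – the class, the distance threshold, the action names – is a hypothetical simplification for illustration, not any manufacturer’s real pipeline:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One object the perception (deep learning) stage has recognized."""
    kind: str          # e.g. "car", "pedestrian", "sign"
    distance_m: float  # distance from our vehicle, in meters
    in_our_lane: bool

def plan_action(objects: list[DetectedObject]) -> str:
    """Turn perception output into a driving decision."""
    # A pedestrian in our path always forces a complete halt.
    for obj in objects:
        if obj.kind == "pedestrian" and obj.in_our_lane:
            return "full_stop"
    # Slow down if anything in our lane is closer than 20 m (assumed threshold).
    if any(o.in_our_lane and o.distance_m < 20 for o in objects):
        return "slow_down"
    return "maintain_speed"

print(plan_action([DetectedObject("car", 15.0, True)]))  # slow_down
```

Real planners weigh far more factors, but the structure is the same: perception produces a list of objects, and planning maps that list to an action.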
In fact, most regular cars today are already equipped with powerful devices and technologies that use machine learning and image processing in order to enable humans to drive more safely.
In the US, for example, Mobileye, a technology company that develops vision-based advanced driver-assistance systems, has partnered with Hyundai, Tesla, and several other automakers to place smart cameras in their vehicles.
This is something that’s happening today. Essentially, the cameras developed by Mobileye enable the vehicle to detect its surroundings, calculate the chances of a collision, and automatically brake if an accident is unavoidable.
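The collision-calculation step in systems like this can be roughly illustrated with a time-to-collision check. The functions and the 1.5-second threshold below are my own assumptions for the sketch, not Mobileye’s actual logic:

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinity if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def should_auto_brake(distance_m: float, closing_speed_mps: float,
                      threshold_s: float = 1.5) -> bool:
    # Brake automatically once impact is nearer than the threshold.
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s

print(should_auto_brake(10.0, 10.0))  # True: 1.0 s to impact
print(should_auto_brake(60.0, 10.0))  # False: 6.0 s to impact
```

The hard part in practice is not this arithmetic but producing reliable distance and closing-speed estimates from camera images – which is exactly where the deep learning comes in.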
If deep learning is the brain, then supercomputers are the heart.
The advanced hardware that goes into a self-driving vehicle – cameras, radars, laser scanners, ultrasonic sensors, and more – requires enormous processing power, which only a high-performing supercomputer built to crunch immense amounts of data at very high speeds can deliver. Autonomous cars also require massive data aggregation and analytics, because they need to collect all of this data all the time for purposes such as “black box” recording, continuous machine learning, maintenance, and additional services and APIs.
It makes sense; there’s a lot going on on the road.
If the vehicle is driving on a six-lane highway surrounded by dozens of cars, each moving at a different speed, then there are hundreds, if not thousands, of objects to track – translating into terabytes of raw sensor data – all of which must be processed quickly.
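A back-of-envelope estimate shows why the data volume is so large. Every number below is an illustrative assumption of mine, not a measured figure from any vendor:

```python
# Rough estimate of raw camera data alone (ignoring radar, lidar, etc.).
cameras = 8                  # assumed camera count on the vehicle
width, height = 1920, 1080   # pixels per frame (assumed HD cameras)
bytes_per_pixel = 3          # uncompressed RGB
fps = 30                     # frames per second per camera

bytes_per_second = cameras * width * height * bytes_per_pixel * fps
gb_per_minute = bytes_per_second * 60 / 1e9
print(f"{gb_per_minute:.0f} GB/minute of raw camera data")  # ~90 GB/minute
```

At roughly 1.5 GB per second from cameras alone, it is easy to see why ordinary automotive processors are not enough.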
By utilizing a supercomputer, like the ones Nvidia has developed, autonomous cars can handle, process, and analyze the data generated by the hardware add-ons I previously mentioned.
It’s clear, then, that the powerful technologies intended to boost safety, experience, and performance are well on their way to supporting autonomous vehicles in decision making, driving patterns, and other big-data algorithms.
It’s unfortunate that legal restrictions are inhibiting the major players from continuing to advance their developments, but hopefully the technology will soon become so overwhelmingly safe and beneficial that lawmakers will have no choice but to reconsider.