
This article was published on August 13, 2018

Researchers teach autonomous cars to deal with irrational humans

A team of researchers recently determined that both human drivers and autonomous vehicles tend to prioritize the safety of their own vehicle over those around it. People are selfish, and AI acts a lot like people, which means the solution is the same for both: we need to learn some courtesy.

Automating human behavior is always a bit of a briar patch, especially when heavy machinery is involved and lives are at stake. For example, we don’t want the machines imitating our road rage – a semi-truck getting “angry” over being cut off conjures images of the film “Maximum Overdrive.”

But we do want the machines to imitate our ability to react to the unexpected. It’s well documented that Tesla’s Autopilot software – which is not intended to replace a human driver – once failed to recognize a large truck because it was oriented in a direction the computer wasn’t expecting and was painted white. There’s a pretty good chance most human drivers wouldn’t have made the same mistake.

So where is the happy medium? How do we make driverless cars not only better at driving than us, but also better at dealing with our imperfect behavior on the roads? According to the researchers, who hail from the University of California, Berkeley, we change their motivations.

In a recently published white paper, the team states:

We propose that purely selfish robots that care about their safety and driving quality are not good enough. They should also be courteous to other drivers. This is of crucial importance since humans are not perfectly rational, and their behavior will be influenced by the aggressiveness of the robot cars.

We advocate that a robot should balance minimizing the inconvenience it brings to another driver, and that we can formalize inconvenience as the increase in the other driver’s cost due to the robot’s behavior to capture one aspect of human behavior irrationality.

It seems like the answer to the problem would simply be to institute some sort of Asimov-style law for cars: always give humans the right of way. But it’s pretty easy to imagine why that’s not the best answer.

Humans, as we mentioned before, are selfish. Once we find out that autonomous vehicles always err on the side of protecting humans, we’ll simply exploit them – or, more specifically, their reward mechanisms – so that we can always go first. In a world full of driverless cars, the human driver would always take priority, which means riding in a driverless car would, at least theoretically, always be slower and less efficient.

According to the researchers:

Selfishness has not been a problem with approaches that predict human plans and react to them, because that led to conservative robots that always try to stay out of the way and let people do what they want.

But, as we are switching to more recent approaches that draw on the game-theoretic aspects of interaction, our cars are starting to become more aggressive. They cut people off, or inch forward at intersections to go first. While this behavior is good sometimes, we would not want to see it all the time.

Basically, robots that just get out of the way and always hold the door open won’t ever get anywhere. And robots that try to maximize their “rewards” for accomplishing their “goals” are likely to end up becoming more aggressive as they gather more driving data – like humans. Neither solution seems optimal.

To fix this, the researchers came up with a way to measure and quantify the “courtesy” mechanism that some human drivers employ. After all, we’re not always screaming around the streets with our faces buried in our phones.

The way it works: the researchers use algorithms to perform a cost/benefit analysis of the potential actions for both the human driver and the autonomous vehicle. They take into account three distinct scenarios when a human driver interacts with a driverless car (see the sketch after the list):

  1. What the human could have done, had the robot car not been there.
  2. What the human could have done, had the robot car only been there to help the human.
  3. What the human could have done, had the robot car just kept doing what it was previously doing.
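
To make that concrete, here’s a minimal sketch – not the researchers’ actual code – of how those three counterfactuals could be boiled down into a single “inconvenience” number. All the names, the speed-based toy cost, and the choice of the best counterfactual as the baseline are illustrative assumptions, not details from the paper.

```python
def courtesy_penalty(human_cost, candidate_plan, absent_plan, helpful_plan, previous_plan):
    """Extra cost the robot's candidate plan imposes on the human driver.

    human_cost(robot_plan) -> float is a stand-in for the planner's model of
    the cost (delay, braking, discomfort) of the human's best response when
    the robot executes robot_plan.
    """
    # What the human ends up paying if the robot follows its candidate plan.
    actual = human_cost(candidate_plan)

    # Best the human could have done under the three counterfactuals:
    # 1) the robot isn't there, 2) the robot acts purely in the human's interest,
    # 3) the robot just keeps doing what it was already doing.
    baseline = min(human_cost(p) for p in (absent_plan, helpful_plan, previous_plan))

    # "Inconvenience": how much worse off the human is because of the robot.
    return max(0.0, actual - baseline)


# Toy usage: a "plan" here is just the speed (m/s) the robot's maneuver forces the
# human down to, and the human's cost grows the further they drop below 15 m/s.
toy_cost = lambda forced_speed: max(0.0, 15.0 - forced_speed)
print(courtesy_penalty(toy_cost, candidate_plan=9.0,
                       absent_plan=15.0, helpful_plan=15.0, previous_plan=12.0))  # 6.0
```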

The team translates these scenarios into mathematical terms a computer can work with, and the algorithms do the rest. In essence, they’ve defined and quantified courtesy, and found a way to make the AI consider it during optimization. Instead of all or nothing, the researchers are teaching the AI to find a comfortable medium between aggressive and passive, by being less human. This should make it easier for the robots to deal with the irrational things humans do when they drive.
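
Presumably that courtesy number then enters the car’s planning objective as just another weighted term, so the optimizer trades its own efficiency against the inconvenience it causes. The sketch below is an assumption about how that balance could look; the weight and the names are invented for illustration.

```python
def robot_objective(selfish_cost, courtesy, courtesy_weight=0.5):
    """Lower is better: the robot's own driving cost plus a penalty for
    inconveniencing the human. A weight of 0 gives a purely selfish car;
    a very large weight gives a car that always defers."""
    return selfish_cost + courtesy_weight * courtesy
```

The weight is effectively the dial between “aggressive” and “doormat” that the article describes.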

This work is still in the early stages, but it should go a long way toward figuring out how to integrate machine and human drivers without exacerbating the problems we’re trying to solve.

H/t: Import AI 
