
This article was published on February 28, 2018

Google teaches AI to fool humans so it can learn from our mistakes

Fooling robots into seeing things that aren’t there, or into completely miscategorizing an image, is all fun and games until someone gets decapitated because a car’s autopilot feature thought a white truck was a cloud.

In order to avoid such tragedies, it’s incredibly important that researchers in the field of artificial intelligence understand the nature of these simple attacks and accidents. This means computers are going to have to get smarter. That’s why Google is studying the human brain and neural networks simultaneously.

So far, neuroscience has informed the field of artificial intelligence through endeavors such as the creation of neural networks. The idea is that what doesn’t fool a person shouldn’t be able to trick an AI.

A Google research team, which included Ian Goodfellow, the guy who literally wrote the book on deep learning, recently published a white paper: “Adversarial Examples that Fool both Human and Computer Vision.” The work points out that the methods used to trick AI into misclassifying an image don’t work on the human brain, and it posits that this information can be used to build more resilient neural networks.


Last year, when a group of MIT researchers mounted an adversarial attack against a Google AI, all they had to do was embed a subtle perturbation into an image. In doing so, the team convinced an advanced neural network it was looking at a rifle, when in fact it was seeing a turtle. Most children over the age of three would’ve known the difference.

Credit: MIT

The problem isn’t with Google’s AI, but with a simple flaw that all computers have: a lack of eyeballs. Machines don’t “see” the world; they simply process images – and that makes it easy to manipulate the parts of an image that people can’t see in order to fool them.
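For the curious, most of these attacks boil down to nudging every pixel a tiny amount in whatever direction hurts the model most. Below is a rough, generic sketch of that idea in PyTorch (a fast-gradient-sign-style perturbation). It’s illustrative only, not the MIT team’s exact method, and the model, image, and label here are placeholders.

```python
# A rough, generic sketch of a gradient-based adversarial perturbation
# (fast gradient sign method). Illustrative only: the actual MIT attack
# was more involved, and the model, image, and label are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss.

    model: any differentiable image classifier that returns logits
    image: a (1, C, H, W) tensor with values in [0, 1]
    true_label: a (1,) tensor holding the correct class index
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The gradient's sign says which way each pixel should move to hurt the
    # model most; a small epsilon keeps the change invisible to people.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A change that small usually looks like faint noise to a person, but it can be enough to flip a classifier’s prediction from, say, “turtle” to “rifle.”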

To fix the problem, Google is trying to figure out why humans are resistant to certain forms of image manipulation. And perhaps more importantly, it’s trying to discern exactly what it takes to fool a person with an image.

According to the white paper published by the team:

If we knew conclusively that the human brain could resist a certain class of adversarial examples, this would provide an existence proof for a similar mechanism in machine learning security.

Credit: Google
Left: original image of a cat. Right: The same image successfully manipulated to fool humans into thinking they’re seeing a dog.

In order to make people see the cat as a dog, the researchers zoomed in and fudged some of the details. Chances are it passes at a glance, but if you look at it for more than a few seconds, it’s obviously a doctored image. The point the researchers are making is that it’s easy to fool humans, but only in certain ways.


Right now, people are the undisputed champions of image recognition. But completely driverless cars are set to be unleashed on roadways around the world in 2018, and whether AI can “see” the world, and all the objects in it, is a matter of life and death.


