Google’s machine learning researchers have automated the automation again. The company last week showed off an algorithm tweak that gives robots foresight and caution, so they don’t require humans to reset them during learning sessions.
A deep learning network typically gains proficiency at a task, like controlling a robotic factory arm or keeping a car on the road, through trial-and-error repetition. This approach is called reinforcement learning, and it’s one of the workhorses of modern machine learning.
Google, armed with fancy new algorithms, has eliminated the need for a person to hit the ‘reset button’ when AI fails an experiment.
It might not seem monumental at first glance, but watch a simulated stick figure use this newfound caution to make decisions and it may evoke a tiny emotional response. It’s hard not to feel bad for the reckless one.
This represents a significant upgrade in the field of experimental robotics.
The reason we have a real-world version of Cortana from “Halo” long before Rosie the Robot from “The Jetsons” is that it’s easier to program an AI to talk than to walk.
When your smart speaker needs a reset you just unplug it, but when a robot falls down a flight of stairs (or off a stage) the problem is much bigger.
The developers were able to solve this dilemma by creating a “forward policy” and a “reset policy.” The dueling algorithms tell the AI when it’s about to do something that it can’t recover from, like walk off a cliff, and stop it.
According to a white paper from researchers on the Google Brain team, “by learning a value function for the reset policy, we can automatically determine when the forward policy is about to enter a non-reversible state, providing for uncertainty-aware safety aborts.”
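In rough pseudocode terms, the idea reads something like the sketch below: before committing to an action, check the reset policy’s value estimates for the state that action would lead to, and abort if any estimate says recovery looks unlikely. Every name, policy, and threshold here is an illustrative assumption, not the researchers’ actual implementation.

```python
def safe_step(state, forward_policy, reset_q_ensemble, threshold=0.5):
    """Pick a forward action, but abort if the reset policy's value
    estimates suggest the resulting state may be non-reversible."""
    action = forward_policy(state)
    # Ensemble of value estimates for the reset policy. Taking the
    # minimum makes the abort "uncertainty-aware": if any ensemble
    # member thinks the state will be hard to undo, stop before acting.
    reset_values = [q(state, action) for q in reset_q_ensemble]
    if min(reset_values) < threshold:
        return "ABORT"  # hand control back to the reset policy instead
    return action

# Toy example: a 1-D walker stepping toward a cliff at position 3.
forward = lambda s: s + 1                    # always step toward the goal
ensemble = [
    lambda s, a: 1.0 if a < 3 else 0.0,      # member 1: cliff is at 3
    lambda s, a: 1.0 if a < 4 else 0.0,      # member 2: thinks it's at 4
]

print(safe_step(0, forward, ensemble))  # 1  (safe to proceed)
print(safe_step(2, forward, ensemble))  # ABORT (member 1 fears the cliff)
```

The worst-case (minimum) aggregation is the key design choice: disagreement within the ensemble is treated as uncertainty, and uncertainty near a cliff is reason enough to stop.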
And while most of us, geographically speaking, don’t have much use for an AI that’s just really good at not falling off cliffs, there’s a glimmer of the future in every new algorithm.
Robots aren’t ready for the world yet. Most of them wouldn’t be able to find an outlet to charge without an intern or grad student on hand. They’re a bit like toddlers at this point.
The least we can do, before we go filling robots full of AI and putting them in shopping malls and airports, is teach them how to exercise caution before attempting something dangerous.
We teach our children to look both ways before crossing the street; Google teaches its robots not to walk off cliffs (or into fountains, we hope).