Human-centric AI news and analysis

Google algorithm teaches robot how to walk in mere hours

Google researchers developed algorithms that the robot used to walk independently

A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot learn to walk across a range of surfaces within just hours of practice, annihilating the record times set by its human overlords.

Their system uses deep reinforcement learning, a form of AI that teaches through trial and error by providing rewards for certain actions.
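The reward-driven trial-and-error loop can be illustrated with a toy example. The sketch below uses tabular Q-learning on a one-dimensional track, where an agent earns a reward for reaching the goal position; this is only a minimal illustration of the reinforcement-learning idea, not Google's actual system, which trains a deep neural network on a physical quadruped.

```python
import random

# Toy trial-and-error learning: tabular Q-learning on a 1-D track.
# The agent starts at position 0 and earns a reward of 1 for
# reaching position 5. All constants here are illustrative.

N_STATES = 6            # positions 0..5; position 5 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    if random.random() < EPS:                          # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])   # otherwise exploit

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the action's value toward reward + discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should step right from every position.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The same principle scales up in the real system: instead of a lookup table over six positions, a neural network maps sensor readings to motor commands, and the "reward" scores how well the robot is walking.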

This technique is typically evaluated in virtual environments. However, building simulations that could replicate the robot walking on various surfaces would be highly complex and time-consuming, so the researchers chose to train their system in the real world.

[Read: ‘World’s strongest’ robotic hand can cut paper, hold eggs — and even play the piano]

They also had to reduce the need for human intervention when the robot fell over or left its training area. To do this, they made it perform multiple maneuvers simultaneously and restricted its movements within a set boundary. As a result, when it reached the edge of its territory, it would recognize the border and start walking backwards.
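The boundary trick described above amounts to a simple scheduling rule: when the robot nears the edge of its training area, override the current task with one that walks it back toward the center. A hypothetical sketch, with illustrative task names and distances not taken from the paper:

```python
# Hypothetical task scheduler for boundary recovery. The robot trains on
# several maneuvers at once; when its position approaches the workspace
# edge, the scheduler picks whichever task moves it back toward the middle.
# BOUNDARY and the task names are assumptions for illustration.

BOUNDARY = 5.0   # half-width of the training area, in metres

def next_task(position, default_task):
    """Pick the next training task given the robot's 1-D position."""
    if position >= BOUNDARY:
        return "walk_backward"   # heading out of bounds: reverse course
    if position <= -BOUNDARY:
        return "walk_forward"
    return default_task          # safely inside: keep the scheduled task

print(next_task(6.2, "turn_left"))   # beyond the far edge -> walk_backward
print(next_task(0.0, "turn_left"))   # in the middle -> turn_left
```

Because the recovery maneuvers are themselves tasks the robot is learning, driving it back inside the boundary doubles as extra practice rather than wasted time.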

What the bot can teach us

They tested the system on flat ground, a soft mattress, and a doormat with crevices. Within just hours on each surface, it had learned to walk forwards and backwards, and to turn left and right, without the need for manual resets.

“Our system can learn to walk on these terrains in just a few hours, with minimal human effort, and acquire distinct and specialized gaits for each one,” the research team explained in a paper published this week.

Check out how they did it in the video below:

On flat ground, it learned the task in 1.5 hours. On the doormat, it took 4.5 hours, and on the mattress, 5.5 hours.

After the robot had learned to walk, the researchers connected a video game controller that allowed them to manually move the robot using the techniques it had learned.

The researchers believe their system could help robots master the tricky art of locomotion — and it may even teach us something about how humans learn to walk too.

Published March 3, 2020 — 19:05 UTC