Google’s London-based AI outfit DeepMind has created two different types of AI that can use their ‘imagination’ to plan ahead and perform tasks with a higher success rate than AIs without imagination. Sorry if I made you click because you wanted AIs predicting flying cars. I promise this is cool too.
In a post on their site, DeepMind researchers give a short review of “a new family of approaches for imagination-based planning.” The so-called Imagination-Augmented Agents, or I2As, use an internal ‘imagination encoder’ that helps the AI decide what are and what aren’t useful predictions about its environment.
The researchers argue that giving AI imagination is crucial for dealing with real-world environments, where it’s helpful to test a few possible outcomes of actions ‘in your head’ to predict which one is best.
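To make that ‘testing in your head’ idea concrete, here’s a minimal sketch of imagination-based action selection. This is a toy illustration, not DeepMind’s actual I2A architecture: the `imagine` model, the 1-D environment, and the `plan` function are all hypothetical stand-ins for the learned components described in the paper.

```python
GOAL = 5  # hypothetical target position in a toy 1-D world

def imagine(state, action):
    """Hypothetical internal model: predicts the next state and reward."""
    next_state = state + action          # toy dynamics: actions shift position
    reward = -abs(GOAL - next_state)     # closer to the goal is better
    return next_state, reward

def plan(state, actions, depth=3):
    """Score each action by rolling it out 'in the agent's head',
    then commit to the one whose imagined future looks best."""
    def value(s, d):
        if d == 0:
            return 0
        return max(r + value(ns, d - 1)
                   for ns, r in (imagine(s, a) for a in actions))

    scores = {}
    for a in actions:
        ns, r = imagine(state, a)
        scores[a] = r + value(ns, depth - 1)
    return max(scores, key=scores.get)

# Starting at 0 with the goal at 5, imagined rollouts favor moving right.
print(plan(state=0, actions=[-1, 1]))  # → 1
```

The real I2As learn the internal model and decide how much to trust each imagined rollout; this sketch just hard-codes both to show the shape of the idea.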
Recently, DeepMind’s founder Demis Hassabis wrote a paper published in Neuron about how the development of general-purpose AI is dependent on understanding and encoding human abilities like imagination, curiosity, and memory into AI. With these papers, his company seems to be making headway in at least one of those areas.
The I2A ‘agents’ in the papers were given different tasks to test their predictive abilities, “including the puzzle game Sokoban and a spaceship navigation game.” Sokoban is a puzzle game in which a little alien has to push boxes into the right place – it cannot pull, though, so one wrong move can ruin the whole round.
To challenge the agent, the researchers had every level procedurally generated and only gave the agent one try to solve it, because “this encourages the agent to try different strategies ‘in its head’ before testing them in the real environment,” they wrote.
The agents ended up performing better than their imagination-less counterparts. They learned how to navigate the puzzles with less experience by extracting more information from their internal simulations. When the researchers added a ‘manager’ component that helped create a plan, the system “learns to solve tasks even more efficiently with fewer steps.”
Of course, the type of imagination described in these papers is nowhere near what humans are capable of, but it does show that AIs can benefit from being able to efficiently imagine different scenarios before acting.
As Hassabis wrote in the Neuron paper, creating agents with an imagination that can rival what we can do “is perhaps the hardest challenge for AI research: to build an agent that can plan hierarchically, is truly creative, and can generate solutions to challenges that currently elude even the human mind.” But step by step, we might be getting there.