Google has been pretty far ahead of the curve when it comes to its artificial intelligence research. The world was shocked when its AI beat a top human player at the game of Go. More recently the company taught AI to use imagination and make predictions. The latest trick in Google’s machine-learning research? Naps.
Google is making its AI more human — to a startling degree. Its DeepMind subsidiary has, in effect, taught an AI how to sleep. In a recent blog post the company said:
At first glance, it might seem counter-intuitive to build an artificial agent that needs to ‘sleep’ – after all, they are supposed to grind away at a computational problem long after their programmers have gone to bed. But this principle was a key part of our deep-Q network (DQN), an algorithm that learns to master a diverse range of Atari 2600 games to superhuman level with only the raw pixels and score as inputs. DQN mimics “experience replay”, by storing a subset of training data that it reviews “offline”, allowing it to learn anew from successes or failures that occurred in the past.
DeepMind researchers are teaching computers how to learn. Neural networks, AI, machine-learning algorithms — all the buzzwords you’ve heard — boil down to teaching a computer how to figure something out on its own.
Self-driving cars need to make decisions about traffic, data-analysis algorithms have to decide how to group information segments, and AI needs to be able to think like a person. Otherwise, what’s the point?
Google’s method means that even while a computer is devoting its full resources to a problem, it can store its experiences to "dream" about later, while it’s offline.
It doesn’t have to be actively working on a problem to make progress on it. It can fail at a task, go offline to replay what happened, and succeed at that same task once it’s back online.
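The "offline review" idea the blog post describes is what researchers call an experience replay buffer: transitions are stored while the agent acts, then random batches of past successes and failures are replayed later for learning. Here's a minimal sketch of that structure — the class name, capacity, and transition format are illustrative, not DeepMind's actual DQN implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Illustrative experience-replay buffer (not DeepMind's code)."""

    def __init__(self, capacity=10000):
        # Oldest experiences are discarded once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        # Record one transition while the agent is "awake" (acting).
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Later, "offline", draw a random batch of past transitions
        # so the agent can learn from old successes and failures.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


# Store a few transitions, then "sleep" and replay a random batch.
buf = ReplayBuffer(capacity=5)
for t in range(8):
    buf.store(t, 0, 1.0, t + 1, False)
batch = buf.sample(3)
```

Sampling randomly, rather than replaying experiences in order, is part of what makes the technique effective: it breaks up correlations between consecutive moments of gameplay.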
In the future, when your computer goes into sleep mode, it might be plotting its next victory.