Tristan GreeneEditor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
It’s time to add “AI research” to the list of things that machines can do better than humans. Google’s AlphaGo, the computer that beat the world’s greatest human Go player, just lost to a version of itself that’s never had a single human lesson.
Google is making progress in the field of machine learning at a startling rate. The company’s AutoML recently dropped jaws with its ability to self-replicate, and DeepMind’s AlphaGo can now teach itself better than the humans who created it ever could.
DeepMind is the Google-owned lab behind both versions of AlphaGo, with the latest evolution dubbed AlphaGo Zero, which sounds like the prequel to a manga.
The original AlphaGo is a monster of technology, with 48 AI processors and the data from thousands of Go matches built into it. From the ground up, it was “born” with a pretty decent understanding of the game. Over time, and under the direction of humans, it learned the game and its nuanced strategies.
Eventually AlphaGo became so advanced that it was able to defeat the world’s top human player and establish AI’s supremacy in a game so difficult it makes chess look like checkers.
In short, AlphaGo is pretty legit.
The brilliant minds at Google decided being the best wasn’t good enough; they “evolved” AlphaGo into AlphaGo Zero, which was able to defeat its predecessor at its own game only 40 days later.
Let that sink in.
Now here’s the shocking part: AlphaGo Zero has only four AI processors, and the only data it was given was the rules of the game. Nobody taught it how to play or fed it thousands of matches to study.
According to Google’s blog:
This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.
The AI plays Go against itself, improving with every match. After millions of matches its strategy is, as far as humans are concerned, infallible. Both versions of the machine play the game at a level that’s considered superhuman.
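AlphaGo Zero’s real training couples self-play with deep neural networks and Monte Carlo tree search, which is far beyond a blog snippet. But the core idea of the passage above — a program that improves purely by playing itself, given nothing but the rules — can be sketched with a toy tabular learner for the far simpler game of Nim. Everything here (function names, parameters, the choice of Nim) is our illustration, not DeepMind’s code:

```python
import random

def train_self_play(pile=21, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Toy self-play learning for Nim: players alternate taking 1-3 stones,
    and whoever takes the last stone wins. Both 'players' share one value
    table, so every game is the current agent playing against itself."""
    rng = random.Random(seed)
    # Q[s][a]: estimated outcome for the player to move with s stones
    # remaining, if they take a+1 stones (from that player's perspective).
    Q = [[0.0] * 3 for _ in range(pile + 1)]
    for _ in range(episodes):
        s = pile
        while s > 0:
            legal = list(range(min(3, s)))
            # Epsilon-greedy: mostly play the current best move,
            # sometimes explore a random legal one.
            if rng.random() < eps:
                a = rng.choice(legal)
            else:
                a = max(legal, key=lambda x: Q[s][x])
            s2 = s - (a + 1)
            if s2 == 0:
                target = 1.0  # taking the last stone wins the game
            else:
                # The opponent moves next, so their best reply
                # is our worst case (negamax bootstrapping).
                target = -max(Q[s2][x] for x in range(min(3, s2)))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

def best_move(Q, s):
    """Number of stones the trained agent takes from a pile of s."""
    return 1 + max(range(min(3, s)), key=lambda a: Q[s][a])
```

After training, the agent rediscovers the classic Nim strategy — always leave your opponent a multiple of four — without ever being told it, the same way AlphaGo Zero rediscovered (and then surpassed) centuries of human Go theory from nothing but self-play.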
The speed with which Google’s AutoML and DeepMind have taken “self-learning” to the next level is wonderful and terrifying at the same time.
In order for AI to fulfill its promise to humanity, it has to ease our burdens and free our minds to solve uniquely human problems. A version of AlphaGo that, in a little over a month, can teach itself to outperform a previous iteration is the realization of that ideal.
It’s time we took Sundar Pichai’s assertion that Google is an AI company seriously.