Over the past year, Google and Facebook have been working on AI that can beat humans at the ancient Chinese board game Go, using deep neural networks that mimic the way our brains function.
Yesterday, Facebook CEO Mark Zuckerberg wrote that researchers at his company were “getting close” to teaching computers to win at Go. Today, Google showed off its AlphaGo system that defeated European champion Fan Hui five times in a row.
According to Google, it’s the first time a computer has ever beaten a professional Go player. It’s also a victory for the company over Facebook in the race to create AI that can tackle complex problems.
The game of Go has simple rules but a staggering number of possible board positions — more than there are atoms in the observable universe — making it difficult for computers to master. It poses a unique and complex challenge to AI researchers who, in the past, have only been able to create systems that play at an amateur level.
AlphaGo combines Monte Carlo tree search with deep neural networks: a policy network that suggests promising moves and a value network that evaluates board positions.
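During the search, moves are chosen by balancing the value estimate from simulations against the policy network's prior, with the prior's influence shrinking as a move gets explored. Here is a minimal sketch of that kind of selection rule — the function name and toy numbers are illustrative, not from Google's paper:

```python
def puct_select(q, prior, visits, c_puct=1.0):
    """Pick the move maximizing value plus an exploration bonus.

    q[i]      -- mean value of move i from simulations so far
    prior[i]  -- policy network's probability for move i
    visits[i] -- how often move i has been explored

    The bonus is proportional to the prior and shrinks as a move
    accumulates visits, so the search gradually trusts simulation
    results over the network's first impression.
    """
    scores = [qi + c_puct * pi / (1 + ni)
              for qi, pi, ni in zip(q, prior, visits)]
    return max(range(len(scores)), key=scores.__getitem__)

# Before any simulations, the policy prior dominates...
print(puct_select([0.0, 0.0], [0.9, 0.1], [0, 0]))   # -> 0
# ...but once the favoured move has been visited heavily without
# paying off, the bonus decays and the search tries the alternative.
print(puct_select([0.0, 0.0], [0.9, 0.1], [50, 0]))  # -> 1
```

This decaying bonus is what lets the search concentrate effort on moves a strong human would consider, instead of exploring the board uniformly.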
Google said it trained its AI on 30 million moves from games played by human experts, and also had AlphaGo play against itself so it could figure out new strategies on its own.
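The first, supervised stage amounts to nudging the network's move probabilities toward whatever the expert actually played in each recorded position. A toy sketch of one such cross-entropy update over three candidate moves — helper names and numbers are mine, not from the paper:

```python
import math

def softmax(logits):
    """Convert raw move scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def supervised_step(logits, expert_move, lr=0.5):
    """One gradient step on cross-entropy loss toward the expert's move.

    The gradient of -log p[expert] with respect to the logits is
    (p - onehot), so we subtract it scaled by the learning rate.
    """
    p = softmax(logits)
    return [l - lr * (pi - (1.0 if i == expert_move else 0.0))
            for i, (l, pi) in enumerate(zip(logits, p))]

# Start indifferent between three moves; repeatedly show the expert
# choosing move 2, and the policy's probability for it climbs.
logits = [0.0, 0.0, 0.0]
for _ in range(20):
    logits = supervised_step(logits, expert_move=2)
print(softmax(logits))  # move 2 now carries most of the probability
```

Self-play then takes over where expert data runs out: the network plays itself and reinforces the moves that led to wins, which is how it can discover strategies no human taught it.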
Now that it has defeated the top European pro, AlphaGo will take on world champion Lee Sedol in a five-game match in March.
While Google’s achievement is already notable, defeating Sedol would be as monumental as IBM’s Deep Blue winning against Garry Kasparov at chess nearly 20 years ago.
Google DeepMind’s Demis Hassabis said, “Because the methods we’ve used are general-purpose, our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis.”
You can read more about the company’s research in its paper published today in the scientific journal Nature.
➤ AlphaGo: using machine learning to master the ancient game of Go [Official Google Blog]