This weekend, one of the world’s greatest Go players beat AlphaGo, an AI program developed by Google’s DeepMind unit.
Lee Se-Dol, a 33-year-old South Korean, was pitted against the machine in Go, arguably the most strategically demanding game ever played on a board.
AlphaGo had already won the first three games of the five-game, $1 million series, making Lee’s victory somewhat hollow.
Machines have already beaten us mere mortals at chess – way back in 1997 when IBM’s Deep Blue dispatched Garry Kasparov.
But why should any of this matter? After Deep Blue’s victory it was quickly dismantled – the only impact it had on its maker was a brief bump in share price.
Well, there are several reasons why the tech community is so captivated by the implications of AlphaGo.
Deep Blue was built purely to play chess. Its sole purpose was to calculate and process the roughly 10 to the power of 120 (that is, a 1 followed by 120 zeroes) possible games of chess.
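As a back-of-the-envelope illustration, a few lines of Python show why Go dwarfs even chess in complexity. The branching factors and game lengths below are common rough estimates, not exact figures:

```python
import math

# Rough game-tree complexity: (legal moves per turn) ** (moves per game).
# These are common ballpark estimates, not exact counts.
chess_games = 35 ** 80    # ~35 legal moves per turn, ~80 plies per game
go_games = 250 ** 150     # ~250 legal moves per turn, ~150 moves per game

print(f"chess: about 10^{int(math.log10(chess_games))} possible games")
print(f"go:    about 10^{int(math.log10(go_games))} possible games")
```

Even with these crude numbers, Go comes out hundreds of orders of magnitude larger than chess, which is why brute-force calculation in the Deep Blue style was never going to work.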
AlphaGo, too, has a very specific purpose: its artificial intelligence combines several kinds of programs working in harmony, deep neural networks paired with tree search, to cope with the trillions upon trillions of possible outcomes in a game of Go.
There is a long and fascinating paper published in Nature that goes into detail about AlphaGo’s new approach to learning.
But the potential of something like AlphaGo has wider implications. The company that made it, DeepMind, has been steadily releasing footage of its algorithms playing computer games.
While the algorithm has been tailored to play games like Go, the machinery powering its learning has more general applications. Now, that’s not to say that AlphaGo is going to wake up one morning and decide to learn how to shoot a gun.
But the intelligence required to beat Lee Se-Dol has developed more quickly than anyone expected.
Until just five months ago, computer mastery of Go, a game humans have played for more than 3,000 years, was thought to be at least a decade away. A computer’s ability to play games has become a crucial measure of how far AI has come.
It demonstrates that a machine can execute an “intellectual” task better than humans. What’s unique about AlphaGo is that it has been teaching itself, playing the game against itself millions of times over to learn where its weaknesses are and quickly correct them. It taught itself how to go from an amateur player to a world-class one in less than a year.
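To make the self-play idea concrete, here is a toy sketch in Python. It is my own illustrative example, not DeepMind’s method: a single value table plays both sides of the much simpler game of Nim and improves purely from the outcomes of its own games.

```python
import random

random.seed(0)

# Toy self-play learner for Nim: a pile of stones, each turn take 1-3,
# and whoever takes the last stone wins. One value table plays both
# sides and learns from the wins and losses of its own games.
PILE = 10
Q = {}  # (pile, action) -> average observed return for the mover
N = {}  # (pile, action) -> number of times this move was tried

def choose(pile, eps):
    """Epsilon-greedy move selection from the current value table."""
    actions = [a for a in (1, 2, 3) if a <= pile]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))

def self_play_game(eps=0.1):
    """Play one game against itself and update the table."""
    pile, history = PILE, []
    while pile > 0:
        a = choose(pile, eps)
        history.append((pile, a))
        pile -= a
    # The player who took the last stone wins (+1); flip the sign as
    # we walk back so each move is scored for the player who made it.
    reward = 1.0
    for move in reversed(history):
        N[move] = N.get(move, 0) + 1
        old = Q.get(move, 0.0)
        Q[move] = old + (reward - old) / N[move]
        reward = -reward

for _ in range(5000):
    self_play_game()

# Greedy play should now follow the known winning strategy for this
# game: always leave the opponent a multiple of four stones.
print(choose(6, eps=0.0))
print(choose(5, eps=0.0))
```

AlphaGo’s self-play works on the same principle, with deep neural networks standing in for this lookup table, which is what lets it scale to a game as vast as Go.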
“AI methods are progressing much faster than expected, (which) makes the question of the long-term outcome more urgent,” said AI expert Stuart Russell of the electrical engineering and computer sciences department at the University of California, Berkeley.
“In order to ensure that increasingly powerful AI systems remain completely under human control… there is a lot of work to do,” he continued.
That means the long-held fantasy of a true form of general artificial intelligence may be upon humanity sooner than we thought.
Computer scientist Richard Sutton said, “I don’t think people should be scared… but I do think people should be paying attention.”
So while Lee Se-Dol is licking his wounds, the rest of us should start paying closer attention to the rise of the machines.