This article was published on July 12, 2018

AI can kick your ass at all these games

Since the inception of artificial intelligence in the 1950s, we’ve been trying to find ways to measure progress in the field. For many, the gold standard for AI is the Turing Test, an evaluation of whether a computer can exhibit human-like behavior. But the Turing Test only determines whether AI can fool humans, not compete with them, and it’s very hard to say how deep the test goes.

A much better arena for testing the extent of AI’s intelligence, many scientists believe, is games: domains where contestants can measure and compare their success and clearly determine which one performs better. For decades, we’ve pitted artificial intelligence algorithms against humans in various games with different rules and difficulties.

And one by one, AI has been mastering those games. Here’s a list of the most significant games AI has conquered in recent years, proving that it can perform on par with, or at a level superior to, competent human players.

AI beats the world chess champion

For a long time, we believed chess was the ultimate test of artificial intelligence. John McCarthy, the scientist who coined the term “artificial intelligence” in the 1950s, once referred to chess as the “Drosophila of AI,” a reference to the breakthrough genetic research performed on fruit flies in the early 20th century.

Computer chess is almost as old as modern AI itself, with the first programs appearing as early as 1959. Several educational and scientific institutions have tried to create AI chess engines that could compete with humans, and chess games have been an integral part of personal computers since the first PCs appeared in the 1980s.

However, we had to wait until the mid-1990s to see the first artificially intelligent chess player that could compete with world champions. In 1996, Deep Blue, a chess-playing computer created by IBM, faced world champion Garry Kasparov in a six-game match played under standard regulations. Kasparov won the match, taking three games and drawing two, though Deep Blue won one.

The next year, an upgraded Deep Blue beat Kasparov in a six-game rematch.

Kasparov described the defeat as “a shattering experience.” In an essay for Time Magazine, he wrote, “I could feel—I could smell—a new kind of intelligence across the table,” in reference to a particularly clever move Deep Blue made in the final game.

But compared with today’s dominant AI techniques, machine learning and deep learning, Deep Blue was dumb. It was powered by “good old-fashioned artificial intelligence” (GOFAI): brute-force, human-crafted logic that would test and evaluate vast numbers of possible move sequences at every turn and choose the best one.
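
As a rough illustration (and only that; IBM’s actual engine relied on heavily optimized alpha-beta search and hand-tuned evaluation functions running on dedicated chess chips), a depth-limited minimax search in Python captures the basic brute-force idea. The evaluate, legal_moves and apply_move callbacks are hypothetical placeholders a caller would have to supply:

    def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
        # Depth-limited game-tree search: score every move sequence down to
        # `depth` plies and back the best value up the tree. The evaluate,
        # legal_moves and apply_move callbacks define the game; they are
        # placeholders supplied by the caller, not a real chess API.
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)
        scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                          evaluate, legal_moves, apply_move)
                  for m in moves]
        return max(scores) if maximizing else min(scores)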

AI wins Jeopardy!

In 2011, IBM introduced its new “smart” computer, this time one that had natural language processing capabilities. Codenamed “Watson,” the artificially intelligent machine would later evolve to become the core of one of IBM’s most successful and profitable services.

Watson proved its mettle in Jeopardy!, where it went up against two human opponents, including Ken Jennings, best known for winning 74 games in a row on the famous TV quiz show. To compete in Jeopardy!, Watson had to understand questions posed in natural language and retrieve the knowledge associated with them from its encyclopedic store of information. The task was made even harder by the fact that Jeopardy! clues are often nuanced and laced with hidden, convoluted meanings.

But in the end, Watson proved its superiority. After the three-day contest, it had collected $77,147 in prizes against Jennings’ $24,000 and the $21,600 of Brad Rutter, his other human opponent and another Jeopardy! ace.

“I, for one, welcome our new computer overlords,” Jennings gracefully wrote beneath his final answer after being defeated by Watson.

Watson used machine learning, a type of AI that replaces hard-coded logic with insights and patterns gleaned from large data sets, and which is more effective in fields where defining the rules is difficult. Its victory proved that AI could enter fields where problems are non-deterministic. Since then, Watson’s AI has entered many domains beyond games, including healthcare, cybersecurity, weather forecasting and more.
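
To make the rules-versus-data distinction concrete (this is a toy, nothing like Watson’s actual DeepQA pipeline), a few lines of scikit-learn can learn to categorize quiz clues from examples instead of hand-written rules. The clues and labels below are invented for the sketch:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented miniature training set: clue text -> category.
    clues = ["This planet is known as the Red Planet",
             "This playwright wrote Hamlet",
             "This gas makes up most of Earth's atmosphere",
             "This novelist created Sherlock Holmes"]
    labels = ["SCIENCE", "LITERATURE", "SCIENCE", "LITERATURE"]

    # No hand-coded rules: the model infers word/category patterns itself.
    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(clues), labels)

    # Likely "SCIENCE", thanks to word overlap with the training clues.
    print(model.predict(vectorizer.transform(["This gas makes up most of the sun"])))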

DeepMind’s AI masters Go and beats the world champion—and itself

After Deep Blue defeated Kasparov at chess, an astrophysicist from Princeton remarked that it would take at least a hundred years before computers and AI could beat humans at the ancient Chinese game of Go. While Go has simpler rules than chess, it is far more difficult to master. Go players must learn to play the game at different levels, making short- and long-term decisions as they place their stones on the board. According to one study, Go has more possible board configurations than there are atoms in the universe, making it the most sophisticated board game in the world.

However, less than two decades later, in 2016, DeepMind, a UK-based AI startup that Google acquired in 2014, made history when its AlphaGo AI beat world Go champion Lee Sedol in a five-game match.

“From the very beginning of the game, there was not a moment in time when I felt that I was leading,” Lee Sedol said after the final game, in which AlphaGo played an especially clever move.

Mastering Go the way Deep Blue mastered chess, in a brute-force manner, would have required impossible amounts of computing power. Instead, the scientists at DeepMind used deep learning to develop AlphaGo’s skills. By examining thousands of human-played games and playing many more against human opponents, AlphaGo “learned” the common patterns that constituted successful tactics in the game.
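
In spirit (though AlphaGo’s real policy network was a deep convolutional model trained on millions of expert moves, so treat this purely as a sketch of the idea), the supervised half of that approach is a network that maps a board position to a probability for every possible move:

    import torch
    import torch.nn as nn

    # Toy "policy network": flattened 19x19 board in, a probability
    # distribution over the 361 board points out. Training would push the
    # network to assign high probability to the moves experts actually played.
    policy = nn.Sequential(
        nn.Linear(19 * 19, 256),
        nn.ReLU(),
        nn.Linear(256, 19 * 19),
    )

    board = torch.zeros(1, 19 * 19)             # an empty board
    move_probs = policy(board).softmax(dim=-1)  # one probability per point
    print(move_probs.argmax().item())           # the network's favorite move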

DeepMind later developed AlphaGo Zero, a version of the same AI that could achieve—and surpass—the same results with “zero” human involvement (thus the name). The new iteration relied on reinforcement learning, a technique in which the AI is given only the basic rules and mechanics of the game and told to find its own way around. In practice, this meant AlphaGo Zero silently played against itself millions of times and learned the game of Go from scratch.
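
The shape of that self-play loop can be sketched as follows; make_env, choose_move and update are hypothetical placeholders rather than DeepMind’s API, and the essential point is that the same policy plays both sides and the final result is the only teacher:

    def self_play_training(policy, make_env, choose_move, update, episodes):
        # Sketch of a self-play reinforcement learning loop. make_env,
        # choose_move and update are hypothetical placeholders, not
        # DeepMind's code: the same policy plays both sides, and the only
        # teaching signal is who won each game.
        for _ in range(episodes):
            env, history = make_env(), []
            while not env.done():
                move = choose_move(policy, env.state())
                history.append((env.state(), move))
                env.play(move)
            update(policy, history, env.winner())  # reinforce the winner's choices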

After three days of training, AlphaGo Zero played 100 games against its own older version, which had defeated Lee Sedol—it won all 100 of them. AlphaGo Zero was later generalized to AlphaZero, an AI that mastered not only Go, but other games including chess and shogi.

Aside from making the point that AI can master games previously thought to be the exclusive domain of human intelligence, AlphaGo’s achievements have opened the way for the use of AI in other domains, such as managing large power grids.

AI out-bluffs its opponents in poker

If chess and Go are recognized as epitomes of logical complexity, poker is a totally different beast: a game of luck, deception and bluffing, and definitely a hard nut to crack for machines founded on mathematics and logic.

But in 2017, a team of researchers at Carnegie Mellon University developed Libratus, an AI system that played against four expert Texas Hold ‘Em players and defeated them in a 20-day tournament spanning 120,000 hands of poker. Texas Hold ‘Em is an especially complex version of poker that relies heavily on long-term betting strategies and game theory.

Dong Kim, one of Libratus’ opponents and one of the best Texas Hold ‘Em players in the world, told Wired that halfway through the competition, he started to feel that the AI could see his cards. “I’m not accusing it of cheating,” he said. “It was just that good.”

Libratus learned to play the game without human help, using reinforcement learning to play against itself a trillion times. Its creators also integrated complementary AI systems into Libratus, including one that probed for and patched weaknesses in the main AI by monitoring the human players’ behavior, and another, called the “end-game solver,” which helped focus the main AI’s attention and obviate the need to review useless game scenarios.
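
The algorithmic family behind Libratus’ self-play is counterfactual regret minimization. Its simplest building block, regret matching, fits in a fully runnable toy; the sketch below plays rock-paper-scissors rather than poker, with two players leaning toward the actions they regret not having played until their average strategies approach the game’s equilibrium:

    import random

    ACTIONS = ["rock", "paper", "scissors"]

    def payoff(a, b):
        # +1 if action a beats b, -1 if it loses, 0 on a tie.
        beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
        return 0 if a == b else (1 if (a, b) in beats else -1)

    def current_strategy(regrets):
        # Play each action in proportion to its positive accumulated regret.
        positive = [max(regrets[a], 0.0) for a in ACTIONS]
        total = sum(positive)
        return [p / total for p in positive] if total > 0 else [1 / 3] * 3

    regrets = {p: {a: 0.0 for a in ACTIONS} for p in (0, 1)}
    strategy_sum = {p: {a: 0.0 for a in ACTIONS} for p in (0, 1)}

    for _ in range(20000):
        strategies = {p: current_strategy(regrets[p]) for p in (0, 1)}
        picks = {p: random.choices(ACTIONS, weights=strategies[p])[0] for p in (0, 1)}
        for p in (0, 1):
            opp = picks[1 - p]
            for a, prob in zip(ACTIONS, strategies[p]):
                strategy_sum[p][a] += prob
                # Regret: how much better 'a' would have done than what we played.
                regrets[p][a] += payoff(a, opp) - payoff(picks[p], opp)

    total = sum(strategy_sum[0].values())
    # The average strategy approaches the 1/3-1/3-1/3 equilibrium.
    print({a: round(s / total, 2) for a, s in strategy_sum[0].items()})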

Libratus was notable not only because it mastered a game that goes beyond pure logic and reasoning, but because it could pave the way for using AI in settings such as political negotiations and auctions.

Playing real-time games

One limitation shared by all the games AI has conquered so far is their turn-based nature. Opponents must wait for their turn, and when it comes, they have time to think and plan their moves without worrying about what others might do in the meantime. In recent years, scientists have set their sights on real-time video games, in which all contestants must act simultaneously.

The general idea is to give the AI exactly the same information a human player has. It won’t be able to access data available under the hood, as computer opponents in games usually can. Instead, it will get a video output that shows the state of the game, the same way players see it. It will also have to interact with the game exactly as players do, by issuing input commands such as mouse clicks and keystrokes.

(In reality, the games are modified slightly to make it easier for the AI to understand the game content and send commands, but they basically provide it with the same amount of information and control that humans have.)
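
That observation-in, command-out contract can be pictured with OpenAI’s own Gym toolkit. The sketch below uses the simple CartPole task as a stand-in for a real-time game and a random action as a stand-in for a learned policy (it also uses Gym’s original step() signature; releases after 2022 return five values instead of four):

    import gym

    # Classic Gym control task standing in for a real-time game; the random
    # action below is a placeholder for a learned policy.
    env = gym.make("CartPole-v1")
    observation = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()                  # policy would go here
        observation, reward, done, info = env.step(action)  # frame in, command out
        total_reward += reward
    print("episode reward:", total_reward)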

DeepMind is trying its luck with StarCraft 2, the popular real-time strategy game from Blizzard. The challenge of RTS games is twofold: first, the AI must make split-second decisions at the same time as its opponents; second, it has incomplete information. Unlike board games, RTS games such as StarCraft don’t let players see the entire map at once.

Real-time games also present many more possibilities than turn-based board games. A chess-playing AI has to choose among roughly 35 legal moves, on average, at every turn. In Go, that number grows to around 250 per turn. In real-time games, the AI has to weigh and choose between thousands of possible moves in sub-second timeframes.
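
A quick back-of-the-envelope calculation shows why those numbers matter: the count of move sequences a brute-force search must consider grows roughly as the number of options per turn raised to the number of turns examined.

    # Rough growth of the search space: options per turn raised to the
    # number of turns examined. Ten turns of lookahead is enough to show
    # why exhaustive search stops scaling.
    for game, branching in [("chess", 35), ("Go", 250)]:
        print(f"{game}: ~{branching ** 10:.1e} sequences, 10 turns ahead")
    # chess: ~2.8e+15 sequences; Go: ~9.5e+23 sequences.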

For the time being, DeepMind’s AI still hasn’t been able to beat top human players in StarCraft 2, but that day might soon come.

Another notable effort is that of OpenAI, the nonprofit AI research organization co-founded by Elon Musk. OpenAI has chosen Dota 2, the popular multiplayer fantasy battle game developed by Valve, as its challenge arena. OpenAI developed a team of five neural networks called “OpenAI Five” that played 180 years’ worth of Dota 2 games against themselves every day to learn and master the game. Against amateur players, OpenAI Five has exhibited professional-level gameplay and strategic capabilities. Come August, we will see how it fares against pros in an official tournament in Canada.

What makes real-time games significant is that they will enable AI systems to be integrated into complex settings where information is limited and decisions have to be made in a time-critical fashion.

Why is it important for AI to play games?

On the surface, teaching AI to play games might seem like a waste of time and talent spent proving a useless point. But games matter because they provide a safe environment for developing and testing AI techniques. And all of these game-playing AIs are finding their way into more practical fields.

Something else that makes games relevant is the fact that they’re limited domains, which makes them perfect for developing narrow AI: artificial intelligence designed to solve specific problems, as opposed to general AI, which is supposed to perform abstract, general-purpose functions like the human mind.

I’m not a big fan of general AI, and I think the future of artificial intelligence lies in enhancing and combining narrow AI technologies to augment human intelligence and capabilities. General AI might be decades away, but narrow AI is the here and now and is finding its way into every aspect of our lives. We need all the help we can get—including games—to make sure it doesn’t do anything stupid.

This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
