Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.
“With the debut of AI in Go games, I’ve realized that I’m not at the top even if I become the number one through frantic efforts,” Lee told the Yonhap news agency. “Even if I become the number one, there is an entity that cannot be defeated.”
Predictably, Lee's comments quickly made the rounds across prominent tech publications, some of them running sensational headlines about AI dominance.
Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.
But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.
This raises the question: does mastering a game prove anything? And if not, how can you measure the level of intelligence of an AI system?
Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?
You’re probably going to think that it’s very easy. You’ll also be able to solve different variations of the same problem with multiple walls, and multiple lines, and lines of different colors, just by seeing these three examples. But currently, there’s no AI system, including the ones being developed at the most prestigious research labs, that can learn to solve such a problem with so few examples.
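To make the setup concrete, a few-shot task of this kind can be represented as a handful of demonstration pairs plus a test input, roughly the structure used by the ARC dataset discussed later in this article. The grids and the recoloring rule below are invented for illustration:

```python
# A few-shot task: a handful of demonstration pairs plus a test
# input whose output the solver must infer. Grids are lists of
# lists of integers, each integer representing a color.
task = {
    "train": [
        {"input": [[0, 1, 0]], "output": [[0, 2, 0]]},
        {"input": [[1, 0, 1]], "output": [[2, 0, 2]]},
    ],
    "test": [{"input": [[1, 1, 0]]}],
}

# A human infers the rule (recolor 1 -> 2) from just two examples;
# here we hand-code that inferred rule to show what "solving" means.
def apply_rule(grid):
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

predicted = apply_rule(task["test"][0]["input"])
print(predicted)  # [[2, 2, 0]]
```

The challenge for an AI system is to infer `apply_rule` itself from the two training pairs, not to have it written by hand.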
The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published the paper a few weeks before Lee declared his retirement. In it, he provides many important guidelines on understanding and measuring intelligence.
Ironically, Chollet's paper did not receive a fraction of the attention it deserves. The media is more interested in covering exciting AI news that draws clicks. But the 62-page paper contains a lot of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensationalism.
But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.
What’s wrong with current AI?
“The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games,” Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.
In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing “thinking machines” that possess intelligence comparable to that of humans.
“Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers,” Chollet notes in the paper.
Chollet’s observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways:
- AI models that need millions of examples to perform the simplest tasks
- AI systems that fail as soon as they face corner cases, situations that fall outside of their training examples
- Neural networks that are prone to adversarial examples, small perturbations in input data that cause the AI to behave erratically
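The adversarial-example limitation can be sketched with a linear classifier standing in for a neural network. The weights and inputs below are made up; the point is only to show how a small, targeted perturbation flips a model's decision:

```python
import numpy as np

# Minimal FGSM-style illustration on a linear classifier: nudge the
# input in the direction that most increases the loss, flipping the
# model's decision with a small perturbation.
w = np.array([1.0, -1.0])       # toy "trained" weights (invented)
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.6, 0.5])        # correctly classified as class 1
eps = 0.2
# For a linear model, the gradient of the score w.r.t. x is just w;
# stepping against the sign of that gradient lowers the class-1 score.
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0
```

Deep networks are vulnerable to the same trick; the gradient is computed by backpropagation instead of read off directly, but the perturbation can be just as imperceptible.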
Here’s an example: OpenAI’s Dota-playing neural networks needed 45,000 years’ worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules will result in a sudden drop in its performance.
The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.
What is intelligence?
One of the key challenges the AI community has struggled with is defining intelligence. Scientists have debated for decades over a clear definition that would allow us to evaluate AI systems and determine which are intelligent and which are not.
Chollet borrows the definition by DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
The key here is “achieve goals” and “wide range of environments.” Most current AI systems are pretty good at the first part, which is to achieve very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images will not be able to perform some other related tasks, such as drawing images of objects.
Chollet then examines the two dominant approaches to creating intelligent systems: symbolic AI and machine learning.
Symbolic AI vs machine learning
Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.
“It was then widely accepted within the AI community that the ‘problem of intelligence’ would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases,” Chollet observes.
But rather than being intelligent in their own right, these symbolic AI systems manifest the intelligence of their creators, who write complicated programs that can solve specific tasks.
The second approach, machine learning systems, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is artificial neural networks, which are complex mathematical functions that can create complex mappings between inputs and outputs.
For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called “training.” The AI examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that the patient has cancer.
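The training process described above can be sketched with a one-neuron model on synthetic data. This is a minimal stand-in for a real neural network; the data and the hidden "true rule" it learns are invented for illustration:

```python
import numpy as np

# Toy version of "training": fit a logistic-regression model (a
# one-neuron network) on labeled examples instead of hand-coding
# the decision rule. The data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # hidden "true" rule

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)         # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                         # gradient-descent update
    b -= lr * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The model was never told the rule; it recovered it from the annotated examples, which is exactly what the cancer-detection network does at a vastly larger scale.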
Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.
Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a “blank slate” (tabula rasa) that turns experience (data) into behavior. Therefore, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.
Chollet rejects both approaches because none of them has been able to create generalized AI that is flexible and fluid like the human mind.
“We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence—either a collection of special-purpose programs or a general-purpose Tabula Rasa—are likely incorrect,” he writes.
Truly intelligent systems should be able to develop higher-level skills that span many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is “local generalization,” limited maneuvering room within their own narrow domain.
The requirements of broad and general AI
In his paper, Chollet argues that the “generalization” or “generalization power” for any AI system is its “ability to handle situations (or tasks) that differ from previously encountered situations.”
Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.
Chollet also goes further and speaks of “developer-aware generalization,” which is the ability of an AI system to handle situations that “neither the system nor the developer of the system has encountered before.”
This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without having explicit instructions or training data on them. An example is Steve Wozniak’s famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.
Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.
Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can’t memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.
“Solving any given task with a beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport,” Chollet notes.
This is why it’s totally wrong to compare Deep Blue, AlphaZero, AlphaStar or any other game-playing AI with human intelligence.
The same goes for other AI models, such as Aristo, the program that can pass an eighth-grade science test: it does not possess the same knowledge as a middle school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not to an understanding of the world of science.
(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)
The Abstraction Reasoning Corpus
In the paper, Chollet presents the Abstraction Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with that of human intelligence. ARC is a set of problem-solving tasks tailored to both AI and humans.
One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can’t take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn’t involve language-related problems, which AI systems have historically struggled with.
On the other hand, it’s also designed in a way that prevents the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data. As in the example shown at the beginning of this article, each concept is presented with a handful of examples.
The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.
Also, the test dataset, the problems that are meant to evaluate the intelligence of the developed system, are designed in a way that prevents developers from solving the tasks in advance and hard-coding their solution in the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.
According to Chollet, “ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction.” This means that the test favors “program synthesis,” the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).
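A toy version of program synthesis can make this contrast tangible: instead of optimizing a model for one task, the solver searches for a *program*, here a composition of primitive grid operations, that is consistent with all demonstration pairs, then applies it to the test input. The primitives and the task below are hypothetical, far simpler than anything in ARC:

```python
from itertools import product

# Toy program synthesis: brute-force search over compositions of
# primitive grid operations for a program that maps every demo
# input to its demo output. The primitives are invented examples.
def flip_h(g):  return [row[::-1] for row in g]          # mirror left-right
def flip_v(g):  return g[::-1]                           # mirror top-bottom
def recolor(g): return [[2 if c == 1 else c for c in row] for row in g]
PRIMITIVES = [flip_h, flip_v, recolor]

def synthesize(train_pairs, max_depth=2):
    """Return the first composition of primitives consistent with
    all demonstration pairs, or None if the search fails."""
    for depth in range(1, max_depth + 1):
        for prog in product(PRIMITIVES, repeat=depth):
            def run(g, prog=prog):
                for f in prog:
                    g = f(g)
                return g
            if all(run(i) == o for i, o in train_pairs):
                return run
    return None

train = [([[1, 0], [0, 0]], [[0, 2], [0, 0]]),
         ([[0, 1], [1, 0]], [[2, 0], [0, 2]])]
program = synthesize(train)                # finds flip_h then recolor
print(program([[1, 1], [0, 0]]))           # [[2, 2], [0, 0]]
```

Real program-synthesis research replaces this brute-force loop with far smarter search, but the output is the same kind of object: a reusable program specified by a handful of demonstrations.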
In his experiments with ARC, Chollet has found that humans can fully solve ARC tests. But current AI systems struggle with the same tasks. “To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning,” Chollet notes.
While ARC is a work in progress, it can become a promising benchmark to test the level of progress toward human-level AI. “We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence,” Chollet observes.