

4 ideas about AI that even ‘experts’ get wrong

No, people, we aren’t remotely close to achieving artificial general intelligence yet


Image by: Possessed Photography on Unsplash

The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the field’s early goals, such as housekeeper robots and self-driving cars, continue to recede as we approach them.

Part of the reason we keep missing these goals is a set of incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.

In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence: AI systems that can match the cognitive and general problem-solving skills of humans.

Narrow AI and general AI are not on the same scale

The kind of AI we have today can be very good at solving narrowly defined problems. These systems can outmatch humans at Go and chess, find cancerous patterns in x-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated ones. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”

“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.

For instance, today’s natural language processing systems have come a long way toward solving many different problems, such as translation, text generation, and question-answering on specific topics. At the same time, we have deep learning systems that can convert voice data to text in real time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than just solving smaller problems; it requires common sense, one of the key unsolved challenges of AI.

The easy things are hard to automate

Vision, one of the problems every living being solves without effort, remains a challenge for computers (Credit: Ben Dickson)

When it comes to humans, we expect an intelligent person to be able to do hard things that take years of study and practice: solving calculus and physics problems, playing chess at grandmaster level, or memorizing many poems.

But decades of AI research have shown that the hard tasks, those that require conscious attention, are easier to automate. It is the easy tasks, the things we take for granted, that are hard to automate. Mitchell describes the second fallacy as “Easy things are easy and hard things are hard.”

“The things that we humans do without much thought—looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone—turn out to be the hardest challenges for machines,” Mitchell writes. “Conversely, it’s often easier to get machines to do things that are very hard for humans; for example, solving complex mathematical problems, mastering games like chess and Go, and translating sentences between hundreds of languages have all turned out to be relatively easier for machines.”

Consider vision, for example. Over billions of years, organisms have developed complex apparatuses for processing light signals. Animals use their eyes to take stock of the objects around them, navigate their surroundings, find food, detect threats, and accomplish many other tasks that are vital to their survival. We humans have inherited all those capabilities from our ancestors and use them without conscious thought. But the underlying machinery is far more complicated than the mathematical formulas that frustrate us through high school and college.

Case in point: We still don’t have computer vision systems that are nearly as versatile as human vision. We have managed to create artificial neural networks that roughly mimic parts of the animal and human vision system, such as detecting objects and segmenting images. But they are brittle, sensitive to many kinds of perturbations, and unable to match the full scope of tasks that biological vision can accomplish. That’s why, for instance, the computer vision systems used in self-driving cars are complemented with other technologies, such as lidar and mapping data.
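To make that brittleness concrete, here is a minimal, illustrative sketch (assuming PyTorch and torchvision are installed; the image tensor is just a placeholder) of a fast-gradient-sign perturbation, one well-studied way in which a tiny, deliberately chosen change to an image can flip a classifier’s prediction:

```python
# Illustrative sketch only: a fast-gradient-sign (FGSM-style) perturbation of
# an image classifier's input. Assumes PyTorch and torchvision; the input below
# is a random placeholder -- with a real, correctly classified photo, a nudge
# this small is often enough to change the predicted label.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "photo"
logits = model(image)
pred = logits.argmax(dim=1)

# Move every pixel a tiny step in the direction that most increases the loss.
loss = F.cross_entropy(logits, pred)
loss.backward()
perturbed = (image + 0.03 * image.grad.sign()).clamp(0, 1)

print("before:", pred.item(), "after:", model(perturbed).argmax(dim=1).item())
```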

Another area that has proven to be very difficult is sensorimotor skills that humans master without explicit training. Think of how you handle objects, walk, run, and jump. These are tasks you can do without conscious thought. In fact, while walking, you can do other things, such as listen to a podcast or talk on the phone. But these kinds of skills remain a large and expensive challenge for current AI systems.

“AI is harder than we think, because we are largely unconscious of the complexity of our own thought processes,” Mitchell writes.

Anthropomorphizing AI doesn’t help

Comparing contemporary AI systems with human intelligence creates an erroneous image of the current state of artificial intelligence (Credit: Icons8)

The field of AI is replete with vocabulary that puts software on the same level as human intelligence. We use terms such as “learn,” “understand,” “read,” and “think” to describe how AI algorithms work. While such anthropomorphic terms often serve as shorthand to help convey complex software mechanisms, they can mislead us to think that current AI systems work like the human mind.

Mitchell calls this fallacy “the lure of wishful mnemonics” and writes, “Such shorthand can be misleading to the public trying to understand these results (and to the media reporting on them), and can also unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.”

The wishful mnemonics fallacy has also led the AI community to name algorithm-evaluation benchmarks in misleading ways. Consider, for example, the General Language Understanding Evaluation (GLUE) benchmark, developed by some of the most esteemed organizations and academic institutions in AI. GLUE provides a set of tasks that help evaluate how well a language model can generalize beyond the tasks it has been trained for. But contrary to what the media often portray, if an AI agent gets a higher GLUE score than a human, that doesn’t mean it is better at language understanding.
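For a concrete sense of what the benchmark actually contains, here is a small sketch, assuming the Hugging Face datasets library is available, that loads a single GLUE task; each task is a narrow classification problem, which is part of why a high score doesn’t translate into human-like understanding:

```python
# A minimal look at one GLUE task via the Hugging Face `datasets` library
# (assumed installed). MRPC asks whether two sentences are paraphrases -- a
# narrow binary classification problem, not a test of broad understanding.
from datasets import load_dataset

mrpc = load_dataset("glue", "mrpc")
print(mrpc["train"][0])                       # a sentence pair plus a 0/1 label
print(mrpc["train"].features["label"].names)  # ['not_equivalent', 'equivalent']
```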

“While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks’ names,” Mitchell writes.

A stark example of wishful mnemonics is a 2017 project at Facebook Artificial Intelligence Research, in which scientists trained two AI agents to negotiate on tasks based on human conversations. In their blog post, the researchers noted that “updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating [emphasis mine].”

This led to a stream of clickbait articles that warned about AI systems that were becoming smarter than humans and were communicating in secret dialects. Four years later, the most advanced language models still struggle with understanding basic concepts that most humans learn at a very young age without being instructed.

AI without a body

Can intelligence exist in isolation from a rich physical experience of the world? This is a question that scientists and philosophers have puzzled over for centuries.

One school of thought believes that intelligence is all in the brain and can be separated from the body, also known as the “brain in a vat” theory. Mitchell calls it the “Intelligence is all in the brain” fallacy. With the right algorithms and data, the thinking goes, we can create AI that lives in servers and matches human intelligence. For the proponents of this way of thinking, especially those who support pure deep learning–based approaches, reaching general AI hinges on gathering the right amount of data and creating larger and larger neural networks.

Meanwhile, there’s growing evidence that this approach is doomed to fail. “A growing cadre of researchers is questioning the basis of the ‘all in the brain’ information processing model for understanding intelligence and for creating AI,” she writes.

Human and animal brains have evolved along with all other body organs with the ultimate goal of improving chances of survival. Our intelligence is tightly linked to the limits and capabilities of our bodies. And there is an expanding field of embodied AI that aims to create agents that develop intelligent skills by interacting with their environment through different sensory stimuli.

Mitchell notes that neuroscience research suggests that “neural structures controlling cognition are richly linked to those controlling sensory and motor systems, and that abstract thinking exploits body-based neural ‘maps.’” And indeed, there is growing evidence that feedback from different sensory areas of the brain affects both our conscious and unconscious thoughts.

Mitchell supports the idea that emotions, feelings, subconscious biases, and physical experience are inseparable from intelligence. “Nothing in our knowledge of psychology or neuroscience supports the possibility that ‘pure rationality’ is separable from the emotions and cultural biases that shape our cognition and our objectives,” she writes. “Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.”

Common sense in AI

Developing general AI requires adjusting our understanding of intelligence itself. We are still struggling to define what intelligence is and how to measure it in artificial and natural beings.

“It’s clear that to make and assess progress in AI more effectively, we will need to develop a better vocabulary for talking about what machines can do,” Mitchell writes. “And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems in nature.”

Another challenge that Mitchell discusses in her paper is that of common sense, which she describes as “a kind of umbrella for what’s missing from today’s state-of-the-art AI systems.”

Common sense includes the knowledge we acquire about the world and apply every day without much effort. We learn much of it without being explicitly instructed, by exploring the world as children: concepts such as space, time, gravity, and the physical properties of objects. For example, a child learns at a very young age that when an object is occluded behind another, it has not disappeared and continues to exist, and that when a ball rolls across a table and reaches the edge, it will fall off. We use this knowledge to build mental models of the world, make causal inferences, and predict future states with decent accuracy.

This kind of knowledge is missing in today’s AI systems, which makes them unpredictable and data-hungry. In fact, housekeeping and driving, the two AI applications mentioned at the beginning of this article, are things that most humans learn through common sense and a little bit of practice.

Common sense also includes basic facts about human nature and life, things that we omit in our conversations and writing because we know our readers and listeners know them. For example, we know that if two people are “talking on the phone,” it means that they aren’t in the same room. We also know that if “John reached for the sugar,” it means that there was a container with sugar inside it somewhere near John. This kind of knowledge is crucial to areas such as natural language processing.

“No one yet knows how to capture such knowledge or abilities in machines. This is the current frontier of AI research, and one encouraging way forward is to tap into what’s known about the development of these abilities in young children,” Mitchell writes.

While we still don’t know the answers to many of these questions, a first step toward finding solutions is being aware of our own erroneous thoughts. “Understanding these fallacies and their subtle influences can point to directions for creating more robust, trustworthy, and perhaps actually intelligent AI systems,” Mitchell writes.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
