This article was published on May 30, 2018

Why the leap to general AI still can’t happen yet

Are we close to the AI singularity? Probably not; here's why.



There’s a massive gap in the world of artificial intelligence (AI), and we haven’t been able to cross it successfully. Some developers refer to it as a move from “narrow” to “general” AI, while others describe it as a move from “weak” to “strong” AI.

However you describe it, the premise is approachable, even for non-experts. Most of the AI we use on a daily basis is narrow, meaning it’s specifically programmed to accomplish one task or group of tasks. Google’s AlphaGo, for example, is rooted in a deep learning framework that could be applied to many tasks, but it is specifically trained to excel at the game of Go. It can’t play chess, categorize images, or drive a car.


General AI would, effectively, be able to handle any problem you threw at it. Some digital assistants are inching in this direction, moving from basic language recognition and search functionality to simple tasks, like booking reservations, but ultimately they’re still narrow in scope.

Chances are, we’ll be stuck in these narrow functionalities for the foreseeable future—and maybe for decades to come.


The key challenges

Technological optimists like to envision a future where general AI not only exists, but assists us with practically everything we do in a day.

However, there are some key obstacles preventing us from achieving that reality:

1. Processing power and scale. Any data processing framework that relies on a significant volume of data, or has a high demand for performance, is going to require expensive hardware to run it. Narrow AI already involves incredibly complex data processing, requiring enormous amounts of computation to “learn” new things and large amounts of memory to keep operating. It’s ridiculously expensive to build and maintain these machines, and a general AI program would require even more.

2. The course of learning. Machines don’t learn the same way humans do. A human might see a picture of a dove and call it a bird, then see a picture of a penguin and also describe it as a bird—even though it’s a very different creature. A machine, to make this distinction, would need to be guided with thousands of specific examples, and even then might get the categorization wrong. This is because humans are exceptional at drawing broad conclusions from small pieces of evidence, while computers need to learn everything, including very general concepts, from the ground up (a toy sketch of this kind of example-hungry training follows this list). Programmers can’t simply hand-code the baseline assumptions or abstract reasoning needed to close that gap.

3. Encountering novel experiences. Narrow AI can become very skilled in one specific area, but even within that area, if it encounters something it’s never seen before, it can experience problems. In a general AI environment, all it takes is one novel experience—such as a new image, a new set of circumstances, or an unexpected change in the pattern—to disrupt the entire process.

4. Applying one framework to another. General AI also requires machines to take a framework learned in one set of circumstances and apply it to another. For example, if the AI learns the pattern of a conversation for making a dinner reservation at a restaurant, it would need to figure out the cues, words, and responses that might be appropriate for calling to make a doctor’s appointment—and the ones that are inappropriate. This dynamic shift is hard to pull off, to say the least.
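To make the second obstacle concrete, here is a minimal, purely illustrative sketch in Python. It stands in for a real image classifier with a toy logistic-regression model trained on synthetic "feature vectors"; the data, the "bird"/"not bird" labels, and the numbers are invented for this demo and don't come from any system mentioned above.

# Toy illustration of how example-hungry conventional supervised learning is.
# This is a deliberately simplified stand-in for a real image classifier:
# the "images" are random feature vectors and the model is plain logistic
# regression, nudged toward correct answers over thousands of labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 5,000 labeled "images", each reduced to 64 features.
# Label 1 = "bird", label 0 = "not a bird" (entirely made up for this demo).
n_examples, n_features = 5_000, 64
true_weights = rng.normal(size=n_features)
X = rng.normal(size=(n_examples, n_features))
y = (X @ true_weights + rng.normal(scale=0.5, size=n_examples) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The model starts out knowing nothing and only improves by repeatedly
# seeing labeled examples and making tiny corrections to its weights.
weights = np.zeros(n_features)
learning_rate = 0.1

for epoch in range(200):                      # many passes over the data...
    predictions = sigmoid(X @ weights)
    gradient = X.T @ (predictions - y) / n_examples
    weights -= learning_rate * gradient       # ...and many small adjustments

accuracy = ((sigmoid(X @ weights) > 0.5) == y).mean()
print(f"accuracy after 200 passes over 5,000 labeled examples: {accuracy:.2%}")

A person shown a single dove and a single penguin can generalize immediately; this kind of system only gets there by grinding through a large pile of labeled data.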

What would a general AI need?


At the risk of digging too far into semantics, let’s examine what we would need to achieve a “true” general AI:

1. Learning from a single example. A general AI should be able to learn at least a little from a single example of an interaction or task. Currently, narrow AI requires thousands, if not millions, of examples before it develops an understanding of what to look for. Take image recognition: even our best AI systems require in-depth training on large labeled datasets. An ideal general AI would be able to learn how to categorize a subject from just a handful of photos (a rough sketch of this idea appears after this list).

2. Abstract reasoning. General AI would also be able to “think”—at least in the abstract sense. It wouldn’t use thousands of tiny details to evaluate something; instead, it would apply general concepts and translate those concepts to solve various kinds of problems.

3. Short-term and long-term memory. General AI would also need some way to differentiate between long-term memory (which would include general concepts and assumptions about the world) and short-term memory (which it would need to complete immediate tasks). Understanding this distinction and applying it correctly would be a major hurdle.

4. Cross-applications of knowledge. You can identify a penguin in a photograph, but what about a video or in real life? If you can learn to feed and care for a dog, would you be able to feed and care for a person? Cross-applications of knowledge are another hurdle; how can you “teach” an AI to instinctively know which concepts can be applied elsewhere, and which ones must be redrawn from scratch?

5. Management of multiple goals. General AI should also be able to manage multiple goals simultaneously. The real trouble here is when some of those goals contradict others—like what usually happens when trying to apply Asimov’s Laws.

6. Efficiency. On top of all these other requirements, the general AI would need to work highly efficiently—that means operating with minimal hardware, for a reasonable cost, and at a reasonable speed. For some applications, this is the biggest limiting factor.
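As a thought experiment for the first item on this list, here is a rough, hypothetical sketch of "learning" a new category from a single stored example, using nearest-neighbor matching in an embedding space. The embed() function is a placeholder (a fixed random projection) standing in for whatever learned representation a future system might provide; the categories and data are invented, and nothing here reflects how an actual general AI would work.

# Hypothetical sketch: classify by comparing against a single stored example
# per category ("one-shot" categorization via nearest-neighbor matching).
# embed() is a placeholder random projection standing in for a learned
# representation; the labels and feature vectors are made up.
import numpy as np

rng = np.random.default_rng(1)
projection = rng.normal(size=(64, 16))  # stand-in for a learned encoder

def embed(image_features):
    """Map raw features to a normalized embedding (placeholder implementation)."""
    vec = image_features @ projection
    return vec / np.linalg.norm(vec)

# One labeled example per category: this is the entire "training set".
prototypes = {
    "penguin": embed(rng.normal(size=64)),
    "dove": embed(rng.normal(size=64)),
}

def classify(image_features):
    """Return the label of the most similar stored example."""
    query = embed(image_features)
    return max(prototypes, key=lambda label: float(query @ prototypes[label]))

print(classify(rng.normal(size=64)))  # prints "penguin" or "dove"

Real systems built around this idea (few-shot and metric-learning approaches) do exist, but they still lean on heavy pre-training, which is exactly the narrow-AI dependence described above.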

As if the primary obstacles weren’t daunting enough, this laundry list of requirements could discourage even the most optimistic programmer.

Why we’re stuck here—probably for a while

With the sheer number of obstacles to overcome and requirements to meet, a feasible general AI is practically impossible to reach through incremental improvements to our current technology. This isn’t something we can achieve with step-by-step progress; instead, general AI would likely require a total teardown of deep learning as we know it today. There have been several proposals for alternative approaches, including cognitive architectures, but none has emerged as a clear frontrunner.

Until we have a viable path forward, we’re going to have to accept the limits of narrow AI and work within them as we continue to innovate.
