This article was published on February 19, 2020

Study: AI expert Gary Marcus explains how to take AI to ‘the next level’


The field of AI, especially in the realm of deep learning, is at an inflection point. We’re either going to break on through to the other side – where deep learning becomes deep understanding – or continue spinning our collective wheels, pouring trillions of dollars’ worth of compute into making Alexa a fraction of a percent better at pretending it understands what you’re saying.

That’s a trite summation of what’s happening, but according to Gary Marcus, the CEO and co-founder of Robust.AI, AI developers and researchers will need to augment their approach before any real progress towards “robust” artificial intelligence can be made.

Marcus published a new paper on arXiv earlier this week titled “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence.” In the 55-page document, he sums up and expands upon the arguments he made during the 2019 “AI Debate” with Yoshua Bengio.

The gist of what Marcus is saying is summed up in a single quote he attributes to members of the Facebook AI team:

A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets… instead of learning meaning in the flexible and generalizable way that humans do.

In other words, like a chicken playing tic-tac-toe, AI doesn’t have the slightest clue what it’s doing. It’s just modifying and repeating whatever it was programmed to do until a human decides the “parameters” for its behavior are properly adjusted.
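To make the “spurious patterns” point concrete, here’s a minimal, hypothetical sketch in Python (our illustration, not an example from Marcus’s paper or Facebook’s research). A “shortcut” feature perfectly tracks the label in the training data but is pure noise at test time, and a model that merely mines statistics will latch onto it:

```python
# Hypothetical illustration: a model "learns" a shortcut feature that
# perfectly predicts the label in training but is random noise at test.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_works):
    labels = rng.integers(0, 2, size=n)
    real = labels + rng.normal(0, 2.0, size=n)  # weak, genuine signal
    if shortcut_works:
        shortcut = labels.astype(float)         # spurious: tracks the label
    else:
        shortcut = rng.integers(0, 2, size=n).astype(float)  # pure noise
    return np.column_stack([real, shortcut]), labels

X_train, y_train = make_data(1000, shortcut_works=True)
X_test, y_test = make_data(1000, shortcut_works=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # ~1.0
print("test accuracy: ", model.score(X_test, y_test))    # near chance
```

The near-perfect training score says nothing about understanding: the model found the statistical path of least resistance, which is roughly the behavior the Facebook researchers describe.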

Marcus argues that AI has no actual understanding because, unlike humans, it doesn’t maintain an internal model of the world and of how the objects in it behave. The prescription, he says, is a hybrid developmental paradigm that combines deep learning with a cognitive-model approach. He writes:

We must refocus, working towards developing a framework for building systems that can routinely acquire, represent, and manipulate abstract knowledge, using that knowledge in the service of building, updating, and reasoning over complex, internal models of the external world.
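What might that look like structurally? Here’s a deliberately toy sketch (our assumption about the shape of such a system, not code from the paper): a learned perception component – stubbed out below – emits symbolic facts, and an explicit reasoning layer manipulates those facts using rules about how the world works:

```python
# Hypothetical hybrid sketch: a learned component extracts symbolic
# facts from raw input; a separate layer reasons over an explicit
# world model. The "neural" part is stubbed; the structure is the point.

def perceive(image_pixels):
    # Stand-in for a deep learning model mapping raw input to symbols.
    return {("cup", "on", "table"), ("table", "in", "kitchen")}

WORLD_RULES = [
    # If X is on Y and Y is in Z, then X is in Z (transitive location).
    lambda facts: {(x, "in", z)
                   for (x, r1, y) in facts if r1 == "on"
                   for (y2, r2, z) in facts if r2 == "in" and y2 == y},
]

def reason(facts):
    # Apply explicit rules until a fixed point, deriving knowledge
    # that was never directly observed.
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in WORLD_RULES)) - facts
        if not new:
            return facts
        facts |= new

facts = reason(perceive(None))
print(("cup", "in", "kitchen") in facts)  # True: inferred, not observed
```

The payoff is the last line: the system “knows” the cup is in the kitchen even though no component ever observed that fact directly – the kind of routine knowledge manipulation Marcus is asking for.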

Marcus’s hybrid approach is a departure from the current pie-in-the-sky efforts of numerous startups, big tech companies, and organizations that have dedicated their work to creating “Artificial General Intelligence,” or superhuman AI.

Marcus instead advocates for a developmental restructuring aimed at an achievable middle ground: a “next level of AI” we can reach before the far-off age of superintelligent machines. To this end, he writes:

Let us call that new level robust artificial intelligence: intelligence that, while not necessarily superhuman or self-improving, can be counted on to apply what it knows to a wide range of problems in a systematic and reliable way, synthesizing knowledge from a variety of sources such that it can reason flexibly and dynamically about the world, transferring what it learns in one context to another, in the way that we would expect of an ordinary adult.

The meat of the problem is that deep learning is not a very good approximation of human reasoning. Anyone who’s ever fumbled through several different commands before landing on the one that “triggers” the proper response from a smart speaker has dealt with AI’s inability to “understand.”

When Google Assistant or Alexa fails to process a command that makes sense but doesn’t use the right phrasing, it’s reacting no differently than if we’d pushed the wrong button on a touch pad: there’s no sense or intelligence there.
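Real assistants use statistical intent classifiers rather than literal string lookup, but the failure mode is analogous to this deliberately crude, hypothetical handler:

```python
# Toy illustration: mapping surface phrasings to actions is functionally
# the same as wiring up buttons; there is no model of what a light is.
INTENTS = {
    "turn on the lights": "lights_on",
    "turn off the lights": "lights_off",
    "set a timer for ten minutes": "start_timer_10m",
}

def handle(utterance):
    action = INTENTS.get(utterance.lower().strip())
    return action or "Sorry, I didn't understand that."

print(handle("Turn on the lights"))            # -> lights_on
print(handle("Could you light up the room?"))  # -> wrong "button": fails
```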

We’ve said before that most AI is either just an output funnel for vast amounts of data or prestidigitation akin to a magician making it appear as though they’d pulled a robot out of their hat. The truth is that Alexa, that GPT-2 text generator everyone’s scared of, and Tesla’s Autopilot system are all one-trick ponies.

Even DeepMind’s AlphaGo, the computer that beat the world’s best players at what’s arguably the world’s toughest game, would get its ass kicked in a game of Monopoly or Scrabble unless someone took the time to completely retrain it.

Marcus insists that we need “an intelligence framed around enduring, abstract knowledge” if we’re to move artificial constructs forward toward human-level reasoning. Throughout history there are tales of scientists gleaning inspiration from unrelated events – Newton supposedly pondered gravity after wondering why apples fell straight down, and Velcro was allegedly invented after an engineer went hiking and got burrs stuck to their pants.

The point is, AI doesn’t have inspiration, nor the ability to gather abstract knowledge and apply it to future domains it was never specifically trained for. And until it does, we’re pretty far away from having “robust AI,” and much, much further from “human-level” or “superintelligent” machines.

For more information, read the full paper on arXiv, and check out “Rebooting AI” by Gary Marcus and Ernest Davis.

