

That’s not how any of this works: Optimistic tech reporting isn’t going to ruin AI



Winter is coming, but fear not. According to the experts, we won’t need Jon Snow to save us — it’s only coming for our machines.

Gary Marcus, an AI expert and professor of psychology at NYU, published a fascinating white paper on Tuesday. It basically serves as a list of reasons why he thinks deep learning is shit and the community should abandon it and start over, which is something he and others seem to firmly believe.

In his recently published work, Marcus posits (at number 5 on his list of hits against the field) that optimists in the media may be to blame for an impending AI winter (a period in which funding and interest dry up and development stalls):

When a high-profile figure like Andrew Ng writes in the Harvard Business Review promising a degree of imminent automation that is out of step with reality, there is fresh risk for seriously dashed expectations.

By the numbers, this is 10 percent of the reason this guy thinks we should all reconsider the idea of deep learning. He goes on:

Machines cannot in fact do many things that ordinary humans can do in a second, ranging from reliably comprehending the world to understanding sentences. No healthy human being would ever mistake a turtle for a rifle or parking sign for a refrigerator.

And boy-howdy is he right. We recently pointed out that machines are so stupid they have a hard time figuring out how to jump in “Super Mario Bros.,” and there are only two buttons!
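(For the curious: the turtle-for-a-rifle confusion Marcus cites comes from adversarial examples — images nudged just enough to fool a classifier while looking unchanged to us. Here’s a minimal sketch of the fast gradient sign method, one common way such attacks are built; the pretrained ResNet-18 and the random tensor standing in for a photo are illustrative assumptions, not the setup from the original turtle experiment.)

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss. The pretrained ResNet-18 and the random
# tensor standing in for a photo are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The change is invisible to a human but can flip the prediction.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # stand-in for a photo of, say, a turtle
y = torch.tensor([0])            # its (assumed) correct class index
x_adv = fgsm(x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())
```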

Seriously, however, it feels a lot like some of these experts are confusing optimism with mismanaged expectations.

That aforementioned video-gaming AI, MarI/O, eventually solves the puzzle and figures out how to jump, which is the important distinction here: nobody is claiming they’ve created the “ultimate deep learning method.” That would be ridiculous, and it’s not far-fetched to think the field of deep learning is just getting started. (A toy sketch of that trial-and-error loop follows below.)
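To make the trial-and-error point concrete, here’s a deliberately simplified sketch of the kind of evolutionary loop MarI/O runs on: random button presses are scored by how far they carry the runner, and the fittest sequence survives each generation. It’s a stand-in for the idea, not MarI/O’s actual NEAT implementation.

```python
# Toy evolutionary loop: learn to press "jump" at the right moment.
# A policy is a list of button presses; fitness is distance travelled.
import random

PIT_AT = 5          # the "obstacle": the runner must jump at step 5
EPISODE_LEN = 10

def run_episode(policy):
    """Score a policy: how far it gets before falling into the pit."""
    for step in range(EPISODE_LEN):
        jumped = policy[step]          # 1 = press jump, 0 = don't
        if step == PIT_AT and not jumped:
            return step                # fell in: fitness is distance covered
    return EPISODE_LEN                 # cleared the level

# Start with random button presses; keep the fittest mutant each round.
best = [random.randint(0, 1) for _ in range(EPISODE_LEN)]
for generation in range(200):
    mutant = [bit ^ (random.random() < 0.1) for bit in best]  # flip ~10% of bits
    if run_episode(mutant) >= run_episode(best):
        best = mutant

print(run_episode(best))  # eventually 10: it "figured out how to jump"
```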

Marcus isn’t alone, by the way. Some experts think deep learning is just a clever trick that’s distracting from any actual pursuit of “artificial intelligence,” and therein lies the rub.

For all intents and purposes, the AI community at large is splitting into two camps: a “general artificial intelligence or bust” side and an “it doesn’t have to be as intelligent as humans to be AI” side.

From this point of view, the biggest problem with deep learning isn’t its limitations; it’s not a lying tech media that doesn’t know what it’s talking about; and it certainly isn’t a lack of vision on the part of developers. The problem is semantics.

There may never be a machine capable of thought or emotion, or of general artificial intelligence (an AI that reasons with the full range of human abilities). And even if one does emerge, nobody seems to be pushing the rhetoric that deep learning is how we’ll get there.

It seems silly to spread fear, uncertainty, and doubt about an entire branch of research because it won’t serve the ends that some experts have in mind.

Not everyone agrees, though: other experts aren’t all that impressed with the assertions Marcus makes.

It’s understandable that experts are concerned a branch of research is pulling resources away from what’s really important, or that excessive hyperbole could leave investors with unrealistic expectations.

But relax, Gary, there’s enough science to go around.

H/t MIT Technology Review.
