Winter is coming, but fear not. According to the experts, we won’t need Jon Snow to save us — it’s only coming for our machines.
Gary Marcus, an AI expert and professor of psychology at NYU, published a fascinating white paper on Tuesday. It basically serves as a list of reasons why he thinks deep learning is shit and why the community should abandon it and start over. Which is something he and others seem to firmly believe:
Deep Learning is not enough, and we need to start over — Hinton confirms what I have been saying for two decades. https://t.co/9BxJYvd7oD
— Gary Marcus (@GaryMarcus) September 15, 2017
In his recently published work, Marcus posits (at number 5 on his list of hits against the field) that optimists in the media may be to blame for an impending AI winter, a period in which development stalls for lack of funding and interest:
When a high-profile figure like Andrew Ng writes in the Harvard Business Review promising a degree of imminent automation that is out of step with reality, there is fresh risk for seriously dashed expectations
By the numbers, this is 10 percent of the reason this guy thinks we should all reconsider the idea of deep learning. He goes on:
Machines cannot in fact do many things that ordinary humans can do in a second, ranging from reliably comprehending the world to understanding sentences. No healthy human being would ever mistake a turtle for a rifle or a parking sign for a refrigerator.
And boy-howdy is he right. We recently pointed out that machines are so stupid they have a hard time figuring out how to jump in “Super Mario Bros.” — and there are only two buttons!
Seriously, however, it feels a lot like some of these experts are confusing optimism with mismanaged expectations.
Just read "Deep Learning: A Critical Appraisal" by @garymarcus. I love deep learning, but I agree. https://t.co/fjymf2OwmU
I want machines that can reason from first principles. E.g., Q: "Why does it only rain outside?" A: "Because a roof blocks the path to the ground."
— Jonathan Mugan (@jmugan) January 4, 2018
That aforementioned video-gaming AI, MarI/O, eventually solves the puzzle and figures out how to jump. And that’s the important distinction here: nobody is claiming they’ve created the “ultimate deep learning method.” That would be ridiculous, because it’s not far-fetched to think the field of deep learning is just getting started.
Marcus isn’t alone, by the way. Some experts think deep learning is just a clever trick that’s distracting from any actual pursuit of “artificial intelligence,” and therein lies the rub.
For all intents and purposes, the AI community at large is splitting into two camps: a “general artificial intelligence or bust” side and an “it doesn’t have to be as intelligent as humans to be AI” side.
From this point of view, the biggest problem with deep learning isn’t its limitations; it’s not a lying tech media that doesn’t know what it’s talking about; and it certainly isn’t a lack of vision on the part of developers. The problem is semantics.
There may never be a machine capable of thought or emotion, or of general artificial intelligence (the idea that AI will reason with the same abilities as a human). And even if one emerges, nobody seems to be pushing the rhetoric that deep learning is how we’ll get there.
It seems silly to spread fear, uncertainty, and doubt about an entire branch of research because it won’t serve the ends that some experts have in mind.
Not everyone agrees — other experts aren’t all that impressed with the assertions Marcus makes:
Disappointing article by @GaryMarcus. He barely addresses the accomplishments of deep learning (eg NL translation) and minimizes others (eg ImageNet with 1000 categories is small ("very finite") ?). 1/ https://t.co/QIjtPAaAkD
— Thomas G. Dietterich (@tdietterich) January 4, 2018
It’s understandable that experts are concerned a branch of research is pulling resources away from what’s really important, or that hyperbole could give investors unrealistic expectations.
But relax, Gary: there’s enough science to go around.