This article was published on June 13, 2018

What happens when the AI bubble bursts?

AI experts are worried the field is on the brink of a scenario similar to the dotcom bubble bursting. It’s called an AI winter. And, if it happens, it could leave a lot of researchers, investors, and entrepreneurs out in the cold.

Such a scenario could happen for a number of reasons, and its effects could vary wildly depending on how poorly the investments in the space end up performing. But before we dive into all of that, it’s important to understand that there’s no official Bubble Czar out there determining when it’s time to head for the lifeboats.

The problem with bubbles is you can never tell when they’re going to burst – or even if you’re in one. But in hindsight, it’s usually pretty easy to see why they happen. In this case, much like the dotcom one, an AI bubble happens because of excessive speculation.

Not only are venture capitalists (VCs) throwing money at anyone who so much as mumbles the words “neural” and “network” in the same sentence, but companies such as Google and Microsoft are re-branding themselves as businesses focused on AI.

The experts at Gartner predict “AI-derived business” will be worth $3.2 trillion by 2022 – more than the film, video game, and music industries combined. Simply put, that’s more than a fair heaping of speculation.

In order to understand what would happen if such a giant bubble burst, we need to go a little further back than the dotcom bubble burst of 2000.

There was an AI winter – the fallout that follows a burst AI bubble – in the 1980s. Many of the breakthroughs we’ve experienced in the past few years, in areas such as computer vision and neural networks, were promised by researchers during ‘the golden years’ of AI, a period from the mid-1950s to the late 1970s.

Today researchers like Ian Goodfellow and Yann LeCun push the envelope when it comes to deep learning techniques. But much of what they and their colleagues do now continues promising work from decades ago – work that was abandoned due to a lack of interest from researchers and funding from investors.

And it’s not just cutting-edge researchers who need to worry. In fact, they may initially be the safest. Google Cloud Chief AI Scientist Dr. Fei-Fei Li will probably find work in all but the coldest of AI winters, but the graduating class of 2023 might not find themselves so lucky. In fact, university researchers could be the first to suffer – when the AI funding dries up, it’ll probably affect Stanford’s research department before Microsoft’s.

So how do we know if an AI winter is coming? The short answer: we don’t, so suck it up and sally forth. The long answer: we take a look at the factors that can cause one.

Microsoft researcher Dr. John Langford makes the case for an impending AI winter through the following observations:

  1. NIPS submissions are up 50% this year to 4800 papers.
  2. There is significant evidence that the process of reviewing papers in machine learning is creaking under several years of exponentiating growth.
  3. Public figures often overclaim the state of AI.
  4. Money rains from the sky on ambitious startups with a good story.
  5. Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).

Some of these seem like pretty big deals – the uptick in NIPS submissions indicates a flood of research, it’s been speculated that low-quality work is beginning to slip through the cracks, and there’s been a lot of rigamarole over the role that tech celebrities and journalists play in causing an AI winter through excessive hyperbole.

His fourth point, if I can editorialize, suggests an AI winter would be the direct result of investors clamming up once they don’t get the instant gratification most of them desire. A lot of these investors are dropping millions of dollars on startups that seem redundant in every way except for the promises they make.

The fifth point seems more like a personal gripe; it’s unclear how a crappy scam affects the future of AI. But it does indicate that the NIPS conference is popular enough that someone would try to rip off its attendees.

In a post on his personal blog, Dr. Langford goes on to say:

We are clearly not in a steady-state situation. Is this a bubble or a revolution? The answer surely includes a bit of revolution—the fields of vision and speech recognition have been turned over by great empirical successes created by deep neural architectures and more generally machine learning has found plentiful real-world uses. At the same time, I find it hard to believe that we aren’t living in a bubble.

So maybe we’re already in a bubble. What the hell are we supposed to do about it? According to Langford, it’s all about damage control. He notes that some research is more “bubbly” than others, and says researchers should focus on “intelligence creation” rather than “intelligence imitation.”

But the ramifications, this time around, may not be quite as severe as they were 40 years ago. It’s safe to say we’ve reached a sort of ‘save point’ in the field of AI. You could argue that some of the things promised by AI researchers – artificial general intelligence, for example – are far-fetched, but for the most part machine learning has already provided solutions to previously unsolved problems.

I can’t imagine Google abandoning the AI that powers its Translate app, for example, unless something better than machine learning comes along to accomplish the task. And there are countless other examples of powerful AI being used all over the world at this very moment.

But, for VCs and entrepreneurs the best advice might still be: an ounce of evaluation is worth a pound of speculation.
