
AI experts are worried the field is on the brink of a scenario similar to the dotcom bubble bursting. It's called an AI winter. And, if it happens, it could leave a lot of researchers, investors, and entrepreneurs out in the cold.
Such a scenario could happen for a number of reasons, and its effects could vary wildly depending on how poorly the investments in the space end up performing. But before we dive into all of that, it's important to understand that there's no official Bubble Czar out there determining when it's time to head for the lifeboats.
The problem with bubbles is you can never tell when they're going to burst, or even if you're in one. But in hindsight, it's usually pretty easy to see why they happen. In this case, much like the dotcom one, an AI bubble happens because of excessive speculation.
Not only are venture capitalists (VCs) throwing money at anyone who so much as mumbles the words "neural" and "network" in the same sentence, but companies such as Google and Microsoft are rebranding themselves as businesses focused on AI.
The experts at Gartner predict "AI-derived business" will be worth $3.2 trillion by 2022, more than the film, video game, and music industries combined. Simply put, that's more than a fair heaping of speculation.
In order to understand what would happen if such a giant bubble burst, we need to go a little further back than the dotcom crash of 2000.
There was an AI winter, the chill that sets in after an AI bubble bursts, in the 1980s. Many of the breakthroughs we've experienced in the past few years, in areas such as computer vision and neural networks, were promised by researchers during "the golden years" of AI, a period from the mid-1950s to the late 1970s.
Today, researchers like Ian Goodfellow and Yann LeCun push the envelope when it comes to deep learning techniques. But much of what they and their colleagues do now continues promising work from decades ago, work that was abandoned due to a lack of interest from researchers and funding from investors.
And it's not just cutting-edge researchers who need to worry. In fact, they may initially be the safest. Google Cloud Chief AI Scientist Dr. Fei-Fei Li will probably find work in all but the coldest of AI winters, but the graduating class of 2023 might not be so lucky. Indeed, university researchers could be the first to suffer: when the AI funding dries up, it'll probably affect Stanford's research department before Microsoft's.
So how do we know if an AI winter is coming? The short answer: we don't, so suck it up and sally forth. The long answer: we take a look at the factors that can cause one.
Microsoft researcher Dr. John Langford makes the case for an impending AI winter through the following observations:
- NIPS submissions are up 50% this year, to 4,800 papers.
- There is significant evidence that the process of reviewing papers in machine learning is creaking under several years of exponentiating growth.
- Public figures often overclaim the state of AI.
- Money rains from the sky on ambitious startups with a good story.
- Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).
Some of these seem like pretty big deals: the uptick in NIPS submissions indicates a flood of research, it's been speculated that low-quality work is beginning to slip through the cracks, and there's been a lot of debate over the role that tech celebrities and journalists could play in bringing on an AI winter through excessive hyperbole.
His fourth point, if I can editorialize, suggests an AI winter would be the direct result of investors clamming up after they don't get the instant gratification most of them desire. A lot of these investors are dropping millions of dollars on startups that seem redundant in every way except the promises they make.
The fifth point seems more like a personal gripe. It's unclear how a crappy scam affects the future of AI, but it does indicate that the NIPS conference is popular enough for someone to try to rip off its attendees.
In a post on his personal blog, Dr. Langford goes on to say:
We are clearly not in a steady-state situation. Is this a bubble or a revolution? The answer surely includes a bit of revolution: the fields of vision and speech recognition have been turned over by great empirical successes created by deep neural architectures, and more generally machine learning has found plentiful real-world uses. At the same time, I find it hard to believe that we aren't living in a bubble.
So maybe we're already in a bubble. What the hell are we supposed to do about it? According to Langford, it's all about damage control. He notes that some research is more "bubbly" than other work, and says researchers should focus on "intelligence creation" rather than "intelligence imitation."
But the ramifications, this time around, may not be quite as severe as they were 40 years ago. It's safe to say we've reached a sort of "save point" in the field of AI. You could argue that some of the things promised by AI researchers, artificial general intelligence for example, are far-fetched, but for the most part machine learning has already provided solutions to previously unsolved problems.
I can't imagine Google abandoning the AI that powers its Translate app, for example, unless something better than machine learning comes along to accomplish the task. And there are countless other examples of powerful AI being used all over the world at this very moment.
But for VCs and entrepreneurs, the best advice might still be: an ounce of evaluation is worth a pound of speculation.