Hype is killing AI – here’s how we can stop it

Mystified and vilified at the same time. That’s how I would currently describe “artificial intelligence,” one of the most feared, revered and hated buzzwords of the tech industry.

I was reminded of this fact earlier this week, when I stumbled on an interesting Medium post by Mike Mallazzo. Titled “The BS-Industrial Complex of Phony A.I.,” the article shed light on how product vendors rebrand rudimentary technologies as artificial intelligence to generate hype and excitement, and to attract more capital.

(Unfortunately, the post was removed from Medium, but a cached version still exists if you Google it.)

“For the last few years, startups have shamelessly re-branded rudimentary machine-learning algorithms as the dawn of the singularity, aided by investors and analysts who have a vested interest in building up the hype. Welcome to the artificial intelligence bullshit-industrial complex,” writes Mallazzo, who previously worked at Dynamic Yield, a company recently acquired by McDonald’s.

Mallazzo’s words ring true. The AI industry is currently rushing toward the peak of its latest hype cycle, creating a growing incentive for tech startups to put an AI label on their technologies for no other reason than to jump on the bandwagon. The practice has created confusion and frustration around what AI is and what it can do, and has given rise to a growing group of disenchanted experts, scientists, and academics.

How AI became mystified

Interestingly, a little over a decade ago, artificial intelligence was an unpopular term. During the AI winter, companies intentionally refrained from associating themselves with artificial intelligence and used other terms to describe their technologies.

In the past few years, advances in deep learning and artificial neural networks have renewed interest in AI. In 2018, more than 3,500 AI papers were published on the arXiv preprint server. To put that in perspective, the figure for 2008 stood at 277.

And where innovation sprouts, money follows. A quick search on statistics sites like Statista shows revenue and investment in artificial intelligence growing at an accelerating pace. Consultancy firm PricewaterhouseCoopers estimates that AI will contribute $15 trillion to the global economy by 2030.

Under such circumstances, it’s natural for tech companies to find all sorts of ways to “leverage AI.” Survey after survey shows that more and more companies are “implementing AI” or planning to do so in some way.

But in many, if not most, cases, the embrace of artificial intelligence is in name only. In Europe alone, a recent study by London-based venture capital firm MMC found that out of 2,830 startups classified as AI companies, only 1,580 accurately fit the description. (Interestingly, Mallazzo points out in his article that VC firms are also complicit in mystifying AI.)

Part of the problem lies with the term “artificial intelligence” itself, which is vague by nature and whose definition shifts as time passes and technology improves. That makes it easy for marketers to get away with rebranding old technology as AI.
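
To see how easily a rudimentary technique can wear the AI label, consider a minimal, entirely hypothetical sketch (the class and product names below are invented for illustration, not taken from Mallazzo’s article or any real vendor). The “training” and “inference” here are nothing more than counting and sorting, yet a pitch deck could plausibly describe this as an “AI-powered recommendation engine”:

```python
# Hypothetical sketch: a "recommendation AI" that is really just a
# popularity ranking. All names are invented for illustration.

from collections import Counter

class AIPoweredRecommender:
    """Marketing copy: 'AI-powered personalization engine.'
    Actual behavior: count item views and sort by frequency."""

    def __init__(self):
        self.view_counts = Counter()

    def train(self, view_log):
        # "Training" is nothing more than tallying views.
        self.view_counts.update(view_log)

    def recommend(self, n=3):
        # "Inference" is returning the n most viewed items overall.
        return [item for item, _ in self.view_counts.most_common(n)]

if __name__ == "__main__":
    engine = AIPoweredRecommender()
    engine.train(["shoes", "hat", "shoes", "bag", "shoes", "hat"])
    print(engine.recommend(2))  # ['shoes', 'hat']
```

Nothing here learns, generalizes, or adapts; calling it “AI” changes the pitch, not the code.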

Another problem is that current blends of AI have become very good at some tasks that were previously thought to be off-limits for computers, such as playing complicated board games, diagnosing cancer, generating coherent text, and (almost) driving cars. Moreover, current AI models work in complicated ways that are often difficult to interpret, even for their own creators.

All of this has created a tendency to compare current blends of AI to human intelligence or to give human characteristics to software systems and hardware that use technologies such as neural networks. You have probably heard of “racist algorithms,” “smart gadgets,” and “creepy AI algorithms that create their own language.”

The hype is also creating a growing demand for news stories and articles about artificial intelligence. Tech publications often rush to report on the latest AI paper without bothering to understand the underlying technology. In many cases, the people who write about the technology lack even a minimal understanding of fundamental AI concepts such as machine learning and neural networks.

The vague reporting is often accompanied by clickbait headlines and images of crystal balls and killer robots, making AI even more confusing.

What’s most disappointing, however, is to see reputable research labs fuel the confusion by making questionable claims about the capabilities and threats of their AI models. Even mathematicians who should know better get caught up in the mystical haze and anthropomorphize AI.

A reality check on AI

The mystification of AI fully justifies the backlash by the science community. Mallazzo’s post is just one of many articles denouncing the way companies misuse AI nomenclature to sell their products.

I personally agree with most of the arguments in these articles. Misusing the terminology for the sake of creating excitement and drawing attention and investment to products and services is deplorable, and it certainly hurts the industry. It also casts a shroud of doubt on all the great work being done in the field to apply AI to real problems.

However, beyond denouncing the problem, we also need to find a solution to the current situation, one that better informs the public and helps people make sound decisions.

First, we must recognize that artificial intelligence is a fluid term whose definition changes with time. Therefore, we need to define what AI means in its current context. Can we consider anything that uses a machine learning algorithm to be artificial intelligence? Should AI be limited to systems that employ neural networks and deep learning algorithms?

Or should we evaluate AI based on the cognitive behavior a system manifests, regardless of the underlying technology? If so, what is the minimum level of cognitive accomplishment for a system to be considered AI?

The answers to all these questions and many others will help us narrow down the contemporary definition of AI and give us guidelines to evaluate companies and technologies.

Rationally, we would expect everyone to avoid using “artificial intelligence” and instead employ the more specific terms that describe their technologies. But this might make things too complicated. The AI landscape is a hodgepodge of various technologies, and it would be unfair to expect everyone to educate themselves on the different types of techniques used in the field.

Therefore, we will still need the umbrella term “artificial intelligence” to describe the space in general. However, startups and research labs need to abandon practices that confuse their audience. Instead of trying to wrap their products in an aura of magic and mystery, they should try to explain in the most understandable way how their AI works.

And here’s my personal recommendation to writers and news organizations. Given the current hype surrounding AI, there’s nothing wrong with using “artificial intelligence” in headlines. After all, that’s what readers are looking for, and the average reader is more likely to understand “AI algorithm” than terms such as “transformer” or “autoencoder” or “recurrent neural network.”

But writers also have a responsibility to clarify as much as they can what the underlying technology is and how it works. This will help readers develop a realistic picture of the capabilities and limits of current artificial intelligence technologies.

As for the scientists and researchers who are concerned with the current state of affairs, they should become more engaged in the public discussion to separate fact from fiction. Articles such as Mallazzo’s are helpful, but vilifying the misuse of AI terminology is not enough. There should also be an effort to make AI understandable to less tech-savvy audiences. You shouldn’t need a computer science degree to know that neural networks are not magic.
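
In that demystifying spirit, here is a minimal sketch (assuming only Python and numpy, with random weights used purely for illustration) of what a small neural network actually does in a forward pass: matrix multiplication, addition, and a simple nonlinearity. A real network would learn its weights from data, but the arithmetic is the same.

```python
# A two-layer neural network forward pass: nothing but multiplication,
# addition, and a simple nonlinearity. Weights are random for illustration.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The "neuron activation" just keeps positives and zeroes out negatives.
    return np.maximum(0, x)

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)   # layer 1: multiply, add, clip
    return hidden @ w2 + b2      # layer 2: multiply, add

# A network mapping 4 input features to 2 outputs via 8 hidden units.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = np.array([0.5, -1.2, 3.0, 0.1])
print(forward(x, w1, b1, w2, b2))  # two numbers; no magic involved
```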

In the past seven decades, overpromising and underdelivering by scientists and researchers has triggered AI winters at several junctures. Today, the industry faces a similar threat from actors who are more concerned with short-lived gains than with the long-term benefits the technology can bring to humanity. Protecting AI from another quasi-winter is a collective duty that requires responsible behavior from everyone involved.

This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
