A while ago, as I was browsing through the latest AI news, I stumbled upon a company that claimed to use “machine learning and advanced artificial intelligence” to collect and analyze hundreds of data touch points and improve the user experience in mobile apps.
On the same day, I read about another company that predicted customer behavior using “a combination of machine learning and AI” and “AI-powered predictive analytics.”
(I will not name the companies to avoid shaming them, because I believe their products solve real problems, even if they’re marketing them in a deceptive way.)
There’s much confusion surrounding artificial intelligence and machine learning. Some people treat AI and machine learning as synonyms and use them interchangeably, while others treat them as separate, parallel technologies.
In many cases, the people speaking and writing about the technology don’t know the difference between AI and ML. In others, they intentionally ignore those differences to create hype and excitement for marketing and sales purposes.
As with the rest of this series, in this post I’ll (try to) clear up the differences between artificial intelligence and machine learning to help you distinguish fact from fiction where AI is concerned.
We know what machine learning is
We’ll start with machine learning, which is the easier part of the AI vs ML equation. Machine learning is a subset of artificial intelligence, just one of the many ways you can perform AI.
Machine learning relies on defining behavioral rules by examining and comparing large data sets to find common patterns. This approach is especially effective for solving classification problems.
For instance, if you provide a machine learning program with a lot of x-ray images and their corresponding symptoms, it will be able to assist (or possibly automate) the analysis of x-ray images in the future.
The machine learning application will compare all those different images and find the common patterns among images that have been labeled with similar symptoms. When you provide it with new images, it will compare their contents with the patterns it has gleaned and tell you how likely it is that the images contain any of the symptoms it has studied before.
This type of machine learning is called “supervised learning,” where an algorithm trains on human-labeled data. Unsupervised learning, another type of ML, relies on giving the algorithm unlabeled data and letting it find patterns by itself.
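To make that concrete, here is a minimal supervised learning sketch in Python with scikit-learn. The load_labeled_xray_features helper is hypothetical, a stand-in for whatever turns your labeled images into feature vectors; this illustrates the workflow, not a real medical imaging pipeline.

```python
# Minimal supervised learning sketch with scikit-learn.
# X holds feature vectors extracted from (hypothetical) x-ray images,
# y holds the human-provided label for each image.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_labeled_xray_features()  # hypothetical helper, stands in for your data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = RandomForestClassifier()
model.fit(X_train, y_train)  # learn patterns from the labeled examples

# For new, unseen images, the model reports how likely each label is.
probabilities = model.predict_proba(X_test[:1])
```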
For instance, you might provide an ML algorithm with a constant stream of network traffic and let it learn by itself what baseline, normal network activity looks like and which outlier behavior on the network might be malicious.
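Here is a hedged sketch of that idea using scikit-learn’s IsolationForest; the traffic features (bytes sent, duration, destination port) and the toy data are invented for illustration.

```python
# Unsupervised anomaly detection sketch: no labels are given.
# Each row describes one connection: bytes sent, duration, destination port.
import numpy as np
from sklearn.ensemble import IsolationForest

normal_traffic = np.random.normal(loc=100, scale=10, size=(1000, 3))  # toy data

detector = IsolationForest(contamination=0.01)
detector.fit(normal_traffic)  # the model learns the baseline by itself

# 1 means the connection looks like the baseline; -1 flags an outlier
# that might be malicious.
print(detector.predict(np.array([[102, 95, 108], [900, 2, 4444]])))
```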
Reinforcement learning, the third popular type of machine learning algorithm, relies on providing an ML algorithm with a set of rules and constraints and letting it learn by itself how best to achieve its goals.
Reinforcement learning usually involves some sort of reward, such as scoring points in a game or reducing electricity consumption in a facility. The ML algorithm tries its best to maximize its rewards within the constraints provided. Reinforcement learning is famous for teaching AI algorithms to play different games such as Go, poker, StarCraft and Dota.
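As a bare-bones illustration of that reward loop, here is a tabular Q-learning sketch in plain Python; the five-cell corridor environment is invented purely for this example.

```python
# Q-learning sketch: the agent learns, through trial, error and reward,
# to walk right along a 5-cell corridor to reach the goal at the end.
import random

n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise act greedily on learned values.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the value estimate toward reward + discounted future value.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state
```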
Machine learning is fascinating, especially its more advanced subsets such as deep learning and neural networks. But it’s not magic, even if we sometimes have trouble discerning its inner workings.
At its heart, ML is the study of data to classify information or to predict future trends. In fact, while many like to compare deep learning and neural networks to the way the human brain works, there are huge differences between the two.
Bottom line: We know what machine learning is. It’s a subset of artificial intelligence. We also know what it can and can’t do.
We don’t exactly know what AI is
On the other hand, the term “artificial intelligence” is very broad in scope. According to Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University, “Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”
This is one of the best ways to define AI in a single sentence, but it still shows how broad and vague the field is. For instance, “until recently” is something that changes with time.
Several decades ago, a pocket calculator would have been considered AI, because calculation was something that only the human brain could perform. Today, the calculator is one of the dumbest applications you’ll find on every computer.
As Zachary Lipton, the editor of Approximately Correct explains, the term AI “is aspirational, a moving target based on those capabilities that humans possess but which machines do not.”
AI also encompasses a lot of technologies that we already know. Machine learning is just one of them. Earlier AI work used other methods, such as good old-fashioned AI (GOFAI), which relies on the same if-then rules we use in other applications. Other methods include A*, fuzzy logic, expert systems and a lot more.
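For a feel of what those if-then rules look like in code, here is a toy GOFAI-style sketch; the rules themselves are invented for illustration.

```python
# Toy GOFAI sketch: the "knowledge" is hand-written if-then rules,
# not patterns learned from data. The rules are invented for illustration.
def diagnose(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "possible flu"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "possible allergy"
    return "unknown"

print(diagnose({"fever", "cough"}))  # -> possible flu
```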
Deep Blue, the AI that defeated the world chess champion in 1997, used tree search algorithms to evaluate millions of moves at every turn.
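The core idea behind that kind of search is minimax: recursively explore the tree of possible moves and score each resulting position. The sketch below shows only the bare idea; evaluate, legal_moves and is_terminal are hypothetical hooks, and Deep Blue’s real engine was vastly more sophisticated.

```python
# Minimax tree search sketch. evaluate(), legal_moves() and is_terminal()
# are hypothetical game-specific hooks; legal_moves(state) yields the
# positions reachable from state in one move.
def minimax(state, depth, maximizing):
    if depth == 0 or is_terminal(state):
        return evaluate(state)  # heuristic score of the position
    scores = [minimax(s, depth - 1, not maximizing) for s in legal_moves(state)]
    return max(scores) if maximizing else min(scores)
```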
A lot of the references made to AI pertain to general AI, or human-level intelligence. That is the kind of technology you see in sci-fi movies such as The Matrix or 2001: A Space Odyssey.
But we still don’t know how to create artificial intelligence that is on par with the human mind. Even deep learning, the most advanced type of AI, can’t rival the mind of a human child, let alone an adult. It is perfect for narrow tasks, not general, abstract decisions, which isn’t a bad thing at all.
AI as we know it today is symbolized by Siri and Alexa, by the freakishly precise movie recommendation systems that power Netflix and YouTube, by the algorithms hedge funds use to make micro-trades that rake in millions of dollars every year.
These technologies are becoming increasingly important in our daily lives. In fact, they are augmented intelligence technologies that enhance our abilities and make us more productive.
Bottom line: Unlike machine learning, AI is a moving target, and its definition changes as its related technologies become more advanced. What is and isn’t AI can easily be contested, while machine learning is very clear-cut in its definition. Maybe in a few decades, today’s cutting-edge AI technologies will be considered as dumb and dull as calculators are to us right now.
So if we go back to the examples mentioned at the beginning of the article, what does “machine learning and advanced AI” actually mean? After all, aren’t machine learning and deep learning the most advanced AI technologies currently available? And what does “AI-powered predictive analytics” mean? Doesn’t predictive analytics use machine learning, which is a branch of AI anyway?
Why do tech companies like to use AI and ML interchangeably?
Since the term “artificial intelligence” was coined, the industry has gone through many ups and downs. In the early decades, there was a lot of hype surrounding the industry, and many scientists promised that human-level AI was just around the corner.
But undelivered promises caused a general disenchantment with the industry and led to the AI winter, a period where funding and interest in the field subsided considerably.
Afterwards, companies tried to dissociate themselves from the term AI, which had become synonymous with unsubstantiated hype, and used other terms to refer to their work. For instance, IBM described Deep Blue as a supercomputer and explicitly stated that it did not use artificial intelligence, even though technically it did.
During this period, other terms such as big data, predictive analytics and machine learning started gaining traction and popularity. In 2012, machine learning, deep learning and neural networks made great strides and started being used in an increasing number of fields. Companies suddenly began using the terms machine learning and deep learning to market their products.
Deep learning started to perform tasks that were impossible to do with rule-based programming. Fields such as speech and face recognition, image classification and natural language processing, which were at very crude stages, suddenly took great leaps.
And that is perhaps why we’re seeing a shift back to AI. For those who had been used to the limits of old-fashioned software, the effects of deep learning almost seemed like magic, especially since some of the fields that neural networks and deep learning were entering had been considered off limits for computers.
Machine learning and deep learning engineers are earning 7-digit salaries, even when they’re working at non-profits, which speaks to how hot the field is.
Add to that the misguided descriptions of neural networks as mimicking the workings of the human brain, and you suddenly get the feeling that we’re moving toward artificial general intelligence again. Many scientists and thinkers (Nick Bostrom, Elon Musk…) began warning against an apocalyptic near-future in which superintelligent computers drive humans into slavery and extinction. Fears of technological unemployment resurfaced.
All these elements have helped reignite the excitement and hype surrounding artificial intelligence. Therefore, sales departments find it more profitable to use the vague term AI, which carries a lot of baggage and exudes a mystical aura, instead of being more specific about the kinds of technologies they employ. This lets them oversell or remarket the capabilities of their products without being clear about their limits.
Meanwhile, the “advanced artificial intelligence” that these companies claim to use is usually a variant of machine learning or some other known technology.
Unfortunately, tech publications often report these claims without deep scrutiny, frequently accompanying AI articles with images of crystal balls and other magical representations.
This helps those companies generate hype around their offerings. But down the road, as they fail to meet expectations, they are forced to hire humans to make up for the shortcomings of their AI. In the end, they may end up sowing mistrust in the field and triggering another AI winter for the sake of short-lived gains.
This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.