
A glossary of basic artificial intelligence terms and concepts

The ever-expanding field of artificial intelligence stands on the brink of a mainstream breakthrough. Whether AI-enhanced smartphones whip up the public frenzy first or driverless cars beat them to it, it’s clear that we’re officially in the AI era.

Naysayers will point out that AI isn’t new; researchers were exploring the idea of autonomous computing back in the 1950s. Today’s developers aren’t doing anything fundamentally different, either; they’re building on what experts in the field have been working toward for decades.

What’s changed is the raw computing power we have available now. Fifty years ago, scientists would have needed computers the size of Nevada to do what we can do today on chips the size of pennies. Perhaps clever architecture could have gotten it down to the size of a shopping mall, but you get the point.

As far as hardware is concerned, we’ve arrived, and so have the robots.

But what does it all mean? Defining what AI is, and what it’s going to do for Joe Public, is difficult. Advances that will affect the entire world are often complex, and it takes a while before everyone understands what’s happening.

Remember trying to explain the internet to people in the 90s? There was a time, not all that long ago, when words like “bandwidth” and “router” weren’t common in the lexicon of your average person.

In the next few years everyone is going to want to understand some basic AI terms, because you’ll be seeing the technology everywhere: nearly every gadget made in the near future is going to have some form of artificial intelligence baked in.

Artificial intelligence

The first thing we need to do is understand what an AI actually is. The term “artificial intelligence” refers to a field of computer science focused on creating systems capable of gathering data, making decisions, and solving problems. An example of basic AI is a computer that takes 1,000 photos of cats as input, determines what makes them similar, and then finds other photos of cats on the internet. The computer has learned, as best it can, what a photo of a cat looks like and uses this new intelligence to find things that are similar.
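To make that concrete, here’s a toy sketch in Python, with completely made-up “features” standing in for real image analysis, of how a system could compare new photos against what it has learned from a handful of examples:

```python
# A toy version of the cat-photo example: "learn" what cats look like from
# labeled examples, then score new images by similarity. The feature vectors
# and threshold are invented for illustration; real systems learn features
# from raw pixels.
import numpy as np

# Pretend each image has been reduced to two numbers: ear pointiness, whisker score
cat_examples = np.array([[0.9, 0.8], [0.8, 0.9], [0.95, 0.7]])
cat_profile = cat_examples.mean(axis=0)          # what the "average cat" looks like

def looks_like_a_cat(image_features, threshold=0.2):
    """Return True if the image is close enough to the learned cat profile."""
    return np.linalg.norm(image_features - cat_profile) < threshold

print(looks_like_a_cat(np.array([0.88, 0.82])))  # True: resembles the examples
print(looks_like_a_cat(np.array([0.10, 0.20])))  # False: nothing like a cat
```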

Autonomous

Simply put, autonomy means that an AI system doesn’t need help from people. Driverless cars illustrate the term “autonomous” in varying degrees. A level four vehicle can handle an entire journey without a human at the wheel, so it doesn’t need a steering wheel or pedals, but only within certain conditions or mapped areas. If we ever have a vehicle that can drive itself anywhere, under any conditions a human driver could manage, without ever needing a person to take over, it’ll have reached level five autonomy.

Anything beyond that would be called sentient, and despite the leaps that have been made recently in the field of AI, the singularity (the hypothetical point at which an AI surpasses human intelligence or becomes self-aware) is purely theoretical at this point.

Algorithm

The most important part of AI is the algorithm. Algorithms are the mathematical formulas and/or programming instructions that tell an otherwise non-intelligent computer how to go about solving problems; in effect, they’re the rules that let a computer figure things out on its own. They may be nerdy constructs of numbers and commands, but what algorithms lack in sex appeal they more than make up for in usefulness.
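For a flavor of what such a rule looks like in practice, here’s a minimal sketch of one classic learning algorithm, the perceptron update, applied to invented data. The point is that the “learning” is just a short recipe of arithmetic repeated many times:

```python
# A minimal learning algorithm: the classic perceptron update rule.
# The data and learning rate are invented for illustration.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1])                                   # desired outputs (logical AND)

weights, bias, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                       # repeat the same simple rule many times
    for xi, target in zip(X, y):
        prediction = int(weights @ xi + bias > 0)
        error = target - prediction
        weights += lr * error * xi        # nudge the weights toward the right answer
        bias += lr * error

print(weights, bias)                      # a rule the computer worked out on its own
```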

Machine learning

The meat and potatoes of AI is machine learning; in fact, the two terms are often used interchangeably. They aren’t quite the same thing, though they’re closely connected.

Machine learning is the process by which an AI uses algorithms to learn from data rather than relying on explicitly programmed answers. It’s what happens when those rules are applied to data to produce useful outcomes.
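As a rough illustration of that workflow, feeding examples to an algorithm and then asking the resulting model about new data looks something like this (scikit-learn is one common library, and the numbers are invented):

```python
# A minimal sketch of the machine-learning workflow: feed in examples,
# let an algorithm fit a model, then ask the model about new data.
from sklearn.linear_model import LogisticRegression

# Hours studied -> passed the exam (1) or not (0); made-up data
hours = [[1], [2], [3], [4], [5], [6]]
passed = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(hours, passed)                 # the "learning" happens here

print(model.predict([[2.5], [5.5]]))     # the model's best guess for unseen inputs
```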

Black box

When those rules are applied, an AI does a lot of complex math. Often that math can’t be followed even by humans (and sometimes it just wouldn’t be worth the time it would take to untangle it all), yet the system still outputs useful information. When this happens it’s called black box learning. The real work happens in such a way that we don’t see exactly how the computer arrived at its decisions; we just know what rules and data it used to get there. Black box learning is how we can ethically skip “showing our work” like we had to in high school algebra.
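You can see the idea by peeking inside a trained model: its “knowledge” is just large arrays of numbers, which you can print but not really read. This sketch uses scikit-learn and random, made-up data purely for illustration:

```python
# Peeking inside a "black box": the model's knowledge is just arrays of numbers.
# You can print them, but they don't explain the decision in human terms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # made-up data for illustration
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # a hidden pattern the model must find

model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000).fit(X, y)
print(sum(w.size for w in model.coefs_))  # thousands of learned weights...
print(model.coefs_[0][:2, :5])            # ...none of which reads like a rule
```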

Neural network

When we want an AI to get better at something, we build a neural network. These networks are loosely modeled on the human brain and nervous system: they break a complex problem down into stages of learning, each working on its own level of data. The first level of the network may only worry about a few pixels in an image file and check for similarities in other files. Once that initial stage is done, the network passes its findings to the next level, which tries to understand a few more pixels, and perhaps some metadata. This process continues at every level of the network.
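Stripped of all the learning machinery, the layered structure looks roughly like the sketch below; the weights here are random placeholders rather than anything a network has actually learned:

```python
# A bare-bones neural network "by hand": each layer turns the previous layer's
# output into a slightly more abstract summary. Weights are random, just to
# show the structure; a real network learns them from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_outputs):
    weights = rng.normal(size=(inputs.shape[0], n_outputs))
    return np.maximum(0, inputs @ weights)   # weighted sum plus a simple activation

pixels = rng.random(64)           # stand-in for a tiny 8x8 image
level_1 = layer(pixels, 32)       # early level: local pixel patterns
level_2 = layer(level_1, 16)      # next level: combinations of those patterns
level_3 = layer(level_2, 2)       # final level: "cat" vs "not cat" scores
print(level_3)
```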

Deep learning

Deep learning is what happens when a many-layered neural network gets to work. As the layers process data, the AI builds up a progressively richer understanding. You might be teaching your AI to recognize cats, but once it learns what paws are, it can apply that knowledge to a different task. Deep learning means that instead of merely recognizing what something is, the AI begins to pick up on the underlying features that explain why.

Natural language processing

It takes an advanced neural network to parse human language. Training an AI to interpret human communication is called natural language processing. It’s useful for chatbots and translation services, and it’s represented at the cutting edge by AI assistants like Alexa and Siri.
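One of the very first steps in natural language processing is turning raw text into numbers a model can work with. Here’s a deliberately tiny, hypothetical example of that step; real assistants use far richer representations, but the idea of converting words to vectors looks roughly like this:

```python
# A toy glimpse of NLP: turn sentences into word-count vectors a model can use.
from collections import Counter

sentences = ["turn on the kitchen lights", "turn off the lights", "play some music"]

vocabulary = sorted({word for s in sentences for word in s.split()})

def to_vector(sentence):
    """Count how often each vocabulary word appears in the sentence."""
    counts = Counter(sentence.split())
    return [counts[word] for word in vocabulary]

print(vocabulary)
print(to_vector("turn the music off"))   # the sentence, expressed as numbers
```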

Reinforcement learning

AI is a lot more like us than we might be comfortable believing; we learn in surprisingly similar ways. One method of teaching a machine, just as you might teach a person, is reinforcement learning. This involves giving the AI a broad goal rather than a single correct answer, such as telling it to “improve efficiency” or “find solutions.” The AI runs scenarios and reports its results, which are then evaluated and judged, whether by humans or by a scoring system. The AI takes that feedback and adjusts its next attempt to achieve better results.
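A bare-bones version of that loop might look like the sketch below. Here the feedback comes from an invented scoring rule rather than a human judge, but the try, evaluate, adjust cycle is the same:

```python
# A stripped-down reinforcement-learning loop: try actions, get a reward,
# and shift toward whatever scored better. The two "strategies" and their
# payoffs are invented purely for illustration.
import random

payoff = {"strategy_a": 0.3, "strategy_b": 0.7}    # hidden from the agent
estimates = {"strategy_a": 0.0, "strategy_b": 0.0}

for step in range(1000):
    # Mostly exploit the best-looking strategy, occasionally explore the other
    if random.random() < 0.1:
        action = random.choice(list(estimates))
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < payoff[action] else 0
    estimates[action] += 0.05 * (reward - estimates[action])   # adjust from feedback

print(estimates)   # strategy_b should end up with the higher estimate
```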

Supervised learning

This is the very serious business of proving things. When you train an AI model using supervised learning, you provide the machine with the correct answers ahead of time: it knows both the question and the answer. This is the most common method of training because it teaches the model directly, showing it the patterns that connect each question to its answer.

If you want to know why something happens, or how something happens, an AI can look at the data and determine connections using the supervised learning method.
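A miniature example: every row of training data below comes with its answer attached, and the model’s job is to learn the connection. The fruit measurements are invented, and scikit-learn is just one convenient library for this:

```python
# Supervised learning in miniature: every training example comes with the
# correct answer (the label) attached.
from sklearn.tree import DecisionTreeClassifier

# Each fruit: [weight in grams, smoothness 0-1]; labels are the answers we provide
features = [[150, 0.90], [170, 0.85], [140, 0.95], [120, 0.20], [130, 0.30], [110, 0.25]]
labels = ["apple", "apple", "apple", "orange", "orange", "orange"]

model = DecisionTreeClassifier().fit(features, labels)
print(model.predict([[160, 0.88], [125, 0.28]]))   # expect ['apple' 'orange']
```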

Unsupervised learning

In many ways the spookiest part of AI research is realizing that the machines really are capable of learning, and that they’re using layers upon layers of data and processing power to do so. With unsupervised learning we don’t give the AI an answer at all. Rather than asking it to find a predefined pattern, like “why people choose one brand over another,” we simply feed the machine a bunch of data and let it find whatever patterns it can.
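Contrast that with the supervised example above: here no labels are provided at all, and a clustering algorithm is left to group made-up customer data however it can:

```python
# Unsupervised learning: no answers are provided, just data. The algorithm
# groups the points on its own. The customer figures below are invented.
from sklearn.cluster import KMeans

# Each row: [age, monthly spend] -- note there are no labels at all
customers = [[22, 30], [25, 35], [24, 28], [60, 200], [58, 210], [65, 190]]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
print(clusters)   # e.g. [0 0 0 1 1 1]: two groups the machine found by itself
```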

Transfer learning

Another spooky way machines can learn is through transfer learning. Once an AI has successfully learned something, like how to determine whether an image contains a cat, it can continue to build on that knowledge even when you aren’t asking it to learn anything more about cats. Hypothetically, you could take an AI that identifies cat photos with 90-percent accuracy, have it spend a week training to identify shoes, and then find that it returns to its work on cats with a noticeable improvement in accuracy.
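Here’s a purely conceptual sketch of the idea: a layer of knowledge learned on one task is reused as the starting point for another, with only a small new piece trained on top. Every name and number below is hypothetical:

```python
# Conceptual transfer learning: features learned for one task (spotting cats)
# are reused as the starting point for a related task (spotting shoes).
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were already learned during the cat task
pretrained_feature_weights = rng.normal(size=(64, 16))

def extract_features(image):
    """Reused layer: turns raw pixels into learned features (edges, textures...)."""
    return np.maximum(0, image @ pretrained_feature_weights)

# New task: only a small, fresh layer gets trained on top of the reused features
new_task_weights = np.zeros(16)

def shoe_score(image):
    return extract_features(image) @ new_task_weights

print(shoe_score(rng.random(64)))   # 0.0 for now: the shoe layer is untrained,
                                    # but the reused cat features are already in place
```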

Turing Test

If you’re like most AI experts you’re cautiously optimistic about the future and you have reservations about our safety as we draw closer to robots that are indistinguishable from people.

Alan Turing shared your concerns. Though he died in 1954 his legacy lives on in two ways. Primarily he’s credited with cracking Nazi codes and helping the Allies win World War 2. He’s also the father of modern computing and the creator of the Turing Test.

The test was originally conceived as a way of determining whether a person, communicating through text alone, could be fooled into thinking they were conversing with another human when they were actually talking to an artificial intelligence. It has since become shorthand for any AI that can fool a person into believing they’re seeing or interacting with a real person.

The field of AI research isn’t science fiction, although it is exciting and avant-garde. We’re on the brink of a change in civilization so huge that, according to experts like Oxford Professor Nick Bostrom, it represents a fundamental change in our trajectory as a species.
