This article was published on July 3, 2018

A beginner’s guide to AI: Neural networks

Welcome to Neural Basics, a collection of guides and explainers to help demystify the world of artificial intelligence.

One of the more complex and misunderstood topics making headlines lately is artificial intelligence. People like Elon Musk warn that robots could one day destroy us all, while other experts claim that we’re on the brink of an AI winter and the technology is going nowhere. Making heads or tails of it all is difficult, but the best place to start is with deep learning. Here’s what you need to know.

Artificial intelligence has become a focal point for the global tech community thanks to the rise of deep learning. The radical advance of computer vision and natural language processing, two of AI’s most important and useful functions, is directly related to the creation of artificial neural networks.

For the purposes of this article we’ll refer to artificial neural networks as, simply, neural networks. But, it’s important to know that these deep learning techniques are loosely modeled on the brains of humans and other animals.

What is a neural network?

Scientists believe that a living creature’s brain processes information through the use of a biological neural network. The human brain has as many as 100 trillion synapses – gaps between neurons – which form specific patterns when activated. When a person thinks about a specific thing, remembers something, or experiences something with one of their senses, it’s thought that specific neural patterns “light up” inside the brain.

Think of it like this: when you were learning to read, you might have had to sound out the letters so you could hear them out loud and let your young brain piece the word together. But, once you’ve read the word cat enough times you don’t have to slow down and sound it out. At this point, you access a part of your brain more associated with memory than problem-solving, and a different set of synapses fires because you’ve trained your biological neural network to recognize the word “cat.”

In the field of deep learning, a neural network is represented by a series of layers that work much like a living brain’s synapses. Researchers teach a computer what a cat is – or at least what a picture of a cat is – by feeding it as many images of cats as they can. The neural network takes those images and tries to work out everything that makes them similar, so that it can find cats in other pictures.
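
To make the “series of layers” idea concrete, here is a minimal sketch, written for this explainer rather than taken from any real project, of a tiny cat-vs-not-cat classifier using TensorFlow’s Keras API (the same framework linked in the resources at the end). The image size, layer widths, and the commented-out training data are all hypothetical.

```python
import tensorflow as tf

# A tiny "is this a cat?" network: each Dense layer is a layer of artificial
# neurons, loosely analogous to the layers of synapses described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64 * 64,)),                # a flattened 64x64 grayscale image (hypothetical size)
    tf.keras.layers.Dense(128, activation="relu"),   # first hidden layer
    tf.keras.layers.Dense(64, activation="relu"),    # second hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that the image shows a cat
])

# "Learning" means nudging the connections between layers until the network's
# guesses match the labels humans attached to the example pictures.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(cat_pictures_and_others, labels, epochs=10)   # hypothetical labeled data
```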

Scientists use neural networks to teach computers how to do things for themselves. Here are a few examples of what neural networks are used for:

  • Recognizing objects and faces in images and video
  • Understanding and generating human speech and text
  • Imitating artistic styles and composing original music
  • Spotting signs of disease in medical scans

As you can see neural networks tackle a wide variety of problems. In order to understand how they work – and how computers learn – let’s take a closer look at three basic kinds of neural network.

There are many different kinds of deep learning and several types of neural network, but we’ll be focusing on generative adversarial networks (GANs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).

Generative adversarial network

First up, the GAN. Ian Goodfellow, one of Google’s AI gurus, invented the GAN in 2014. To put it in layman’s terms, a GAN is a neural network made up of two competing parts, a generator and a discriminator (the “adversary”), that fight each other until the generator wins. If you wanted to create an AI that imitates an art style, like Picasso’s for example, you could feed a GAN a bunch of his paintings.

One side of the network would try to create new images that fooled the other side into thinking they were painted by Picasso. Basically, the AI would learn everything it could about Picasso’s work by examining the individual pixels of each image. The generator would keep producing images while the discriminator judged whether each one was a Picasso. Once the generator could reliably fool the discriminator, the results could then be viewed by a human, who could determine whether the algorithm needed to be tweaked to provide better results, or whether it successfully imitated the desired style.
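
For readers who want to see that generator-versus-discriminator game spelled out, here is a heavily simplified sketch in TensorFlow. It is illustrative only: the 28×28 “paintings,” the noise size, and the layer widths are stand-ins, and a real GAN needs far more engineering to produce convincing images.

```python
import tensorflow as tf

# The forger: turns random noise into a fake 28x28 image.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# The critic (discriminator): scores an image as real or fake.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),   # raw real-vs-fake score (a logit)
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    """One round of the game, given a batch of real 28x28 images."""
    noise = tf.random.normal([tf.shape(real_images)[0], 64])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_score = discriminator(real_images, training=True)
        fake_score = discriminator(fakes, training=True)
        # The discriminator wants to call real images real (1) and fakes fake (0).
        d_loss = (bce(tf.ones_like(real_score), real_score)
                  + bce(tf.zeros_like(fake_score), fake_score))
        # The generator wants the discriminator to call its fakes real.
        g_loss = bce(tf.ones_like(fake_score), fake_score)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```

Run train_step over many batches of real paintings and the two networks push each other to improve, which is the “arguing” the previous paragraph describes.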

GANs are used in a wide variety of AI, including this amazing GAN built by Nvidia that creates people out of thin air.

Convolutional neural network

CNNs, not to be confused with the news outlet, are convolutional neural networks. The ideas behind them have been around, at least in theory, since the 1940s, but thanks to advanced hardware and efficient algorithms they’re only now becoming widely useful. Where a GAN tries to create something that fools an adversary, a CNN passes data through several layers of filters that sort it into categories. CNNs are primarily used for image recognition and natural language processing.

If you’ve got a billion hours of video to sift through, you could build a CNN that examines each frame and determines what’s going on. You train a CNN by feeding it complex images that have been tagged by humans. The AI learns to recognize things like stop signs, cars, trees, and butterflies by looking at pictures that humans have labelled, comparing the pixels in each image to the labels it understands, and then sorting everything it sees into the categories it’s been trained on.
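
Here is roughly what that looks like in practice: a bare-bones convolutional network in TensorFlow, trained on CIFAR-10, a standard research dataset of small labelled photos (cars, birds, trucks, and so on) that stands in here for “pictures that humans have labelled.” The architecture is a teaching example, not anything production-grade.

```python
import tensorflow as tf

# CIFAR-10: 60,000 tiny 32x32 color photos, each tagged with one of 10 categories.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to 0..1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # filters that detect simple patterns
    tf.keras.layers.MaxPooling2D(),                      # shrink the feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # filters that detect combinations of patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),                           # one score per category
])

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```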

CNNs are among the most common and robust neural networks. Researchers use them for a myriad of things, including outperforming doctors in diagnosing some diseases.

Recurrent neural network

Finally, we have the RNN, or recurrent neural network. RNNs are primarily used for AI that requires nuance and context to understand its input. An example of such a neural network is a natural language processing AI that interprets human speech. One need look no further than Google’s Assistant and Amazon’s Alexa to see RNNs in action.

To understand how an RNN works, let’s imagine an AI that generates original musical compositions based on human input. If you play a note, the AI tries to ‘hallucinate’ what the next note ‘should’ be. If you play another note, the AI can further anticipate what the song should sound like. Each piece of context provides information for the next step, and an RNN continuously updates itself based on its continuing input – hence the recurrent part of the name.
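
As a rough illustration rather than a real music model, here is what a tiny next-note predictor might look like in TensorFlow. The note encoding (integers 0–127, roughly MIDI pitches), the sequence length, and the training data are all hypothetical.

```python
import tensorflow as tf

NUM_NOTES = 128   # possible pitches the model can choose from
SEQ_LEN = 16      # how many previous notes the model gets to "hear"

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(NUM_NOTES, 32),   # turn note IDs into vectors
    tf.keras.layers.SimpleRNN(64),              # carries context from note to note
    tf.keras.layers.Dense(NUM_NOTES),           # a score for each possible next note
])

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Hypothetical training data: 16-note snippets paired with the note that followed each one.
# model.fit(note_sequences, next_notes, epochs=10)

# To "hallucinate" the next note for a melody the user has played so far
# (melody_so_far would be a list of SEQ_LEN note IDs):
# scores = model(tf.constant([melody_so_far]))
# next_note = int(tf.argmax(scores, axis=-1)[0])
```

In practice, recurrent layers such as LSTM or GRU usually replace the plain SimpleRNN here, because they hold on to context over longer stretches of input.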

Go deeper

There are at least a dozen other kinds of neural network, and the three covered here are far more nuanced than this article can capture. But, if you’ve made it this far, you should have a conversational understanding of what neural networks are and what they do. If you’d like to know more, here are a few suggestions to take your machine learning education to the next level:

  • This free AI course from the University of Helsinki and Reaktor
  • Some free tutorials for using Google’s open source AI platform TensorFlow
  • And “The GANfather,” an excellent MIT Technology Review article by Martin Giles.

And for all the biggest news on what’s happening in the world of neural networks, check out our artificial intelligence section.
