

2010 – 2019: The rise of deep learning

No other technology was more important over the past decade than artificial intelligence. Stanford’s Andrew Ng called it the new electricity, and both Microsoft and Google changed their business strategies to become “AI-first” companies. In the next decade, all technology will be considered “AI technology.” And we can thank deep learning for that.

Deep learning is a subset of machine learning that uses layered artificial neural networks, loosely modeled on the way neurons in the human brain connect, to find patterns in data. Rather than simply following hand-written rules to completion, a deep learning system tunes its internal parameters during training until it outputs the results we desire.
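
To make the “tune the parameters until the output looks right” idea concrete, here is a toy sketch in plain NumPy. It’s an illustration under deliberately simple assumptions, not how any production framework works: a single weight gets nudged repeatedly in the direction that shrinks the error until the model’s predictions line up with the data.

```python
import numpy as np

# Toy example: learn the weight w in y = w * x from examples, by nudging w
# a little at a time in the direction that shrinks the error. Real deep
# learning does the same thing with millions of weights instead of one.
x = np.array([1.0, 2.0, 3.0, 4.0])   # inputs
y = np.array([2.0, 4.0, 6.0, 8.0])   # desired outputs (the hidden rule is y = 2x)

w = 0.0                # the single parameter we will tune
learning_rate = 0.05

for step in range(100):
    predictions = w * x                    # what the model currently says
    error = predictions - y                # how far off it is
    gradient = 2 * np.mean(error * x)      # direction that increases the error
    w -= learning_rate * gradient          # step the other way

print(round(w, 3))     # ends up close to 2.0, the rule hidden in the data
```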

The 2018 Turing Award, often described as the Nobel Prize of computing, was presented in 2019 to three of deep learning’s most influential architects: Facebook’s Yann LeCun, Google’s Geoffrey Hinton, and the University of Montreal’s Yoshua Bengio. This trio, along with many others over the past decade, developed the algorithms, systems, and techniques responsible for the onslaught of AI-powered products and services that are probably dominating your holiday shopping lists.


Deep learning powers your phone’s face unlock feature and it’s the reason Alexa and Siri understand your voice. It’s what makes Microsoft Translator and Google Maps work. If it weren’t for deep learning, Spotify and Netflix would have no clue what you want to hear or watch next.

How does it work? It’s actually simpler than you might think. The machine uses its algorithms to shake out answers like a series of sifters. You pour a bunch of data in one side, it falls through the sifters (abstraction layers) that each pull specific features out of it, and the machine outputs what’s basically a curated insight. A lot of this happens inside what’s called the “black box”: the layers in the middle learn representations that are hard for humans to interpret, even though the underlying math is straightforward. But since the results can be tuned to our liking, it usually doesn’t matter whether we can “show our work” when it comes to deep learning.
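
As a rough illustration of the sifter metaphor, here is a minimal PyTorch sketch of a digit classifier. The layer sizes are picked arbitrarily for a 28x28 image and are assumptions for the example, not a recommended architecture; the point is that each layer takes what the previous one produced and distills it a little further.

```python
import torch
import torch.nn as nn

# Each layer is one "sifter": raw pixels go in the top, and each stage keeps
# a smaller, more abstract summary of what came before. The final layer
# outputs the "curated insight" -- ten scores, one per possible digit.
model = nn.Sequential(
    nn.Flatten(),                # a raw 28x28 image, poured in one side
    nn.Linear(28 * 28, 128),     # first sifter: low-level patterns
    nn.ReLU(),
    nn.Linear(128, 64),          # second sifter: combinations of those patterns
    nn.ReLU(),
    nn.Linear(64, 10),           # the output: a score for each digit, 0 through 9
)

fake_image = torch.rand(1, 28, 28)   # stand-in for a real photo of a digit
print(model(fake_image).shape)       # torch.Size([1, 10])
```

Training then adjusts every weight in those layers, the same “tweak until the output looks right” loop sketched earlier, just at a much larger scale.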

Deep learning, like most artificial intelligence technology, isn’t new. Its foundations were laid by computer scientists in the 1980s: in 1986, a team of researchers that included Geoffrey Hinton showed that backpropagation could be used to train multi-layer artificial neural networks. A few years later, a young Yann LeCun used similar techniques to train a network to recognize handwritten digits.


But, as those of us over 30 can attest, Siri and Alexa weren’t around in the late 1980s, and we didn’t have Google Photos there to touch up our 35mm Kodak prints. Deep learning, in the useful sense we know it now, was still a long way off. Eventually, though, the next generation of AI superstars came along and put their mark on the field.

In 2009, at the beginning of the modern deep learning era, Stanford’s Fei-Fei Li created ImageNet. This massive training dataset made it easier than ever for researchers to develop computer vision algorithms, and it directly led to similar large-scale datasets for natural language processing and other bedrock AI technologies that we take for granted now. It also led to an age of friendly competition, with teams around the globe racing to see who could train the most accurate AI.

The fire was truly lit in 2012, when a team that included Geoffrey Hinton won the ImageNet competition by a wide margin with a deep convolutional neural network. Within a few years there were thousands of AI startups focused on deep learning, and every big tech company from Amazon to Intel had gone all-in on the technology. AI had finally arrived. Young academics with notable ideas were propelled from campus libraries to seven- and eight-figure jobs at Google and Apple. Deep learning was well on its way to becoming a backbone technology for all sorts of big data problems.

And then 2014 came, and Apple’s Ian Goodfellow (then a PhD student at the University of Montreal) invented the generative adversarial network (GAN). This is a type of deep learning neural network that plays cat-and-mouse with itself in order to create outputs that are hard to distinguish from the real examples it was trained on.


When you hear about an AI painting a picture, the machine in question is probably running a GAN trained on thousands or millions of images of real paintings. Inside the GAN, two networks play against each other: a “generator” produces a new painting, and a “discriminator” compares it to the real paintings in the dataset. If the discriminator can’t tell the generator’s work from the real thing, the painting passes; if it spots the fake, the generator scraps that attempt and tries again. A developer tunes the GAN toward one style or another so that it doesn’t spit out blurry gibberish. It’s a bit more complex than that, but the technique is useful in myriad circumstances.
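
Here is a toy, runnable sketch of that cat-and-mouse loop in PyTorch. To keep it small, the “real paintings” are just points drawn from a simple two-dimensional distribution rather than images, and both networks are tiny; those simplifications are assumptions for the example, but the two-player training loop has the same shape as the one a painting GAN uses.

```python
import torch
import torch.nn as nn

# The "forger": turns random noise into a 2-D point (stand-in for a painting).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
# The "critic": guesses whether a 2-D point is real (1) or a forgery (0).
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 3.0        # the "real paintings" dataset
    noise = torch.randn(64, 8)

    # Critic's turn: learn to label real samples 1 and the forger's output 0.
    fakes = generator(noise).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fakes), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Forger's turn: adjust so the critic labels its fakes as real (1).
    fakes = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fakes), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated points should cluster near (3, 3) like the real data.
print(generator(torch.randn(5, 8)).detach())
```

In a real painting GAN the small linear networks are replaced by convolutional ones and the 2-D points by images, but the generator-versus-discriminator loop is the same.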

Rather than just spitting out paintings, Goodfellow’s GANs are also directly behind deepfakes and just about any other AI tech that seeks to blur the line between human-made and AI-generated content.

In the five years since the GAN was invented, we’ve seen the field of AI rise from parlor tricks to producing machines capable of full-fledged superhuman feats. Thanks to deep learning, Boston Dynamics has developed robots capable of traversing rugged terrain autonomously, including an impressive amount of gymnastics. And Skydio developed the world’s first consumer drone capable of truly autonomous navigation. We’re in the “safety testing” phase of truly useful robots, and driverless cars feel like they’re just around the corner.

Furthermore, deep learning is at the heart of current efforts to produce artificial general intelligence (AGI), otherwise known as human-level AI. While most of us dream of living in a world where robot butlers, maids, and chefs attend to our every need, AI researchers and developers across the globe are adapting deep learning techniques to develop machines that can think. It’s clear we’ll need more than deep learning alone to achieve AGI, but we wouldn’t be on the cusp of a golden age of AI if it weren’t for deep learning and the dedicated superheroes of machine learning responsible for its explosion over the past decade.

AI defined the 2010s, and deep learning was at the core of its influence. Sure, big data companies have used algorithms and AI for decades to rule the world, but the hearts and minds of the consumer class, the rest of us, were captivated more by the disembodied voices of our Google Assistant, Siri, and Alexa virtual assistants than by any other AI technology. Deep learning may be a bit of a dinosaur on its own at this point, but we’d be lost without it.

The next ten years will likely see the rise of a new class of algorithm, one that’s better suited for use at the edge and, perhaps, one that harnesses the power of quantum computing. But you can be sure we’ll still be using deep learning in 2029 and for the foreseeable future.
