This article was published on October 30, 2018

Intel’s AI strategy for 2019 goes beyond chips


There are a lot of people in the machine learning community who’ll tell you that deep learning (DL) is a dead end. Gadi Singer isn’t one of them. He believes DL is just getting started; in fact, he says we’re about to enter the next phase of AI, and deep learning at the edge is a huge part of it.

He probably knows what he’s talking about. After all, most of the world’s AI compute cycles run on his company’s architecture.

Singer is Intel’s vice president and architecture general manager for the company’s Artificial Intelligence Products Group. He has the difficult task of figuring out what to do next with Intel’s AI vision. Unlike Ray Kurzweil, Google’s AI futurist, Singer is grounded in the nuts and bolts of his company’s plans. So his predictions aren’t for a faraway future: they’re for 2019 and 2020.

According to him, Intel’s AI strategy is a three-pronged approach that embraces all of machine learning. He says DL is just starting to mature beyond its infancy, so his company needs to be prepared to provide solutions for its practical use at the edge.

When asked what that means for the field of AI, as a whole, he says “It is a next phase, it’s the next phase.”

The aforementioned three-point plan for the future is complex, but it breaks down to some pretty simple ideas:

  1. Developing a diverse array of chips to meet every need
  2. Acquiring talent and developing new technology
  3. Providing an intelligent, consistent software layer throughout its products

And that all sounds really corporate and fiscally responsible – or something – but what exactly does it mean for Intel’s customers? During our interview, Singer stopped the conversation to ask his own question:

What architecture do you think most AI cycles are run on?

The obvious answer, if you’ve been paying attention for the past couple of years, is the GPU. It’s also the wrong answer, says Singer:

It’s not GPU, and I’ll explain why. There’s, basically, two different kinds of tasks in deep learning: training and inference. Nvidia has a lot of the training cycles. But the Nvidia discussion is primarily a training discussion. In 2015, we estimate, the ratio of inference cycles to training cycles was 1:1. Today it’s 5:1 and moving towards 10:1 … Most inference tasks are run on CPUs, on Intel CPUs … Most AI compute cycles run on our architecture.

Intel’s vision for AI, specifically DL, in 2019 and 2020 involves ushering it out of the early experimental age and onto just about every physical object in the world. Intel wants its hardware in the hands of researchers, built into gadgets and wearables, and powering corporate and developer needs. Whatever you’re doing with deep learning, Intel’s plan is to provide a chipset and software environment that’ll suit those needs.

The path to 2019, and the “next phase” of AI, was unclear just a few short years ago, though. According to Singer, deep learning is responsible for a “faster” and “more intense” paradigm shift than any he’s seen at Intel.

As a response to this most recent wave of deep learning advances, Intel has been gobbling up AI startups such as Movidius and Nervana to push the limits of what it can do with AI chips. It’s re-engineered its own CPUs from the inside out to provide more power for machine learning. And it’s begun developing software solutions to tie its AI ambitions together.

It’s clear, based on what Intel’s up to, that reports of deep learning’s death are greatly exaggerated. For a deeper dive into why Singer believes deep learning is so important, you can read his post “Deep Learning is Coming of Age” on Intel’s blog.
