This article was published on August 21, 2019

Intel takes on Google and Amazon with 2 new AI-focused chips

Intel has unveiled two new processors in its Nervana Neural Network Processor (NNP) lineup, aimed at accelerating the training of artificial intelligence (AI) models and the inferences drawn from them.

Dubbed Spring Crest and Spring Hill, the AI-focused chips were showcased for the first time on Tuesday at the Hot Chips Conference in Palo Alto, California, an annual tech symposium held each August.

Intel’s Nervana NNP series is named after Nervana Systems, the startup it acquired in 2016. The chips were designed at Intel’s facility in Haifa, Israel, and are built for training AI models and drawing inferences from data to gain valuable insights.

“In an AI empowered world, we will need to adapt hardware solutions into a combination of processors tailored to specific use cases,” said Naveen Rao, VP of Intel’s Artificial Intelligence Products Group. “This means looking at specific application needs and reducing latency by delivering the best results as close to the data as possible.”

The Nervana Neural Network Processor for Training (Intel Nervana NNP-T) is built to train a variety of deep learning models within a given power budget, while delivering high performance and improved memory efficiency.

In July, Chinese tech giant Baidu was enlisted as a development partner for the NNP-T to ensure its development stayed in “lock-step with the latest customer demands on training hardware.”

The other chip, the Nervana Neural Network Processor for Inference (Intel Nervana NNP-I), targets the inference side of AI: drawing new insights from already-trained models. Using a purpose-built AI inference compute engine, the NNP-I delivers greater performance at lower power.

Facebook is already using the new processors, according to a Reuters report.

The development follows Intel’s earlier AI accelerators such as the Myriad X Vision Processing Unit, which features a Neural Compute Engine for running deep neural network inference.

That said, the chipmaker is far from the only company building machine learning processors for AI workloads. Google’s Tensor Processing Unit (TPU), Amazon’s AWS Inferentia, NVIDIA’s NVDLA, and Graphcore’s Intelligence Processing Unit (IPU) are among the other solutions companies have embraced as the need for complex computation continues to grow.

It’s easy to see why. The AI boom has shaken up the market for computer chips in recent years.

Performing mathematical calculations in parallel, a hallmark of most advanced AI algorithms today, can be done more effectively on graphics chips (GPUs), which have hundreds of simple processing cores, than on conventional processors (CPUs), which have a few complex cores.
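To see why, consider the matrix multiply at the heart of most neural networks: it breaks down into millions of independent multiply-accumulate operations that can run side by side. The minimal Python sketch below (our illustration, not Intel’s code) times the same multiply done one step at a time, as a single sequential core would, against NumPy’s vectorized version, which hands the work to parallel SIMD lanes and cores:

```python
# Illustrative sketch: why AI math favors parallel hardware.
# A matrix multiply decomposes into many independent
# multiply-accumulate operations that can run concurrently.
import time
import numpy as np

n = 128  # kept small so the pure-Python loop finishes quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def matmul_sequential(a, b):
    """Naive triple loop: one multiply-accumulate at a time."""
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            out[i, j] = s
    return out

start = time.perf_counter()
c_par = a @ b  # NumPy dispatches to a BLAS routine that runs in parallel
print(f"vectorized/parallel: {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
c_seq = matmul_sequential(a, b)
print(f"sequential loop:     {time.perf_counter() - start:.4f}s")

assert np.allclose(c_par, c_seq)  # same result, very different speed
```

The gap between the two timings, typically several orders of magnitude, is exactly what GPUs and dedicated AI chips exploit at hardware scale.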

But unlike the TPU, which was designed specifically for Google’s TensorFlow machine learning library, the NNP-T offers direct integration with popular deep learning frameworks such as Baidu’s PaddlePaddle, Facebook’s PyTorch, and TensorFlow.

Intel said its AI platform will help “address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources.”
