
This article was published on October 20, 2021

Here’s how the Pixel 6’s Google Tensor challenges other Android CPUs

It's a weird CPU core setup, but the real story is AI



The Pixel 6 and Pixel 6 Pro were officially revealed today. Alongside them, we got some confirmation about the phones’ fascinating new Google Tensor SoC, and how it’s different from anything else currently out there.

There are a few key ways Google is differentiating itself. One of the most peculiar, as predicted by leaks, is the arrangement of CPU cores.

Some background: Current flagship Android processors like the Exynos 2100 and Snapdragon 888 use three types of cores. The Snapdragon 888, for example, uses one Cortex X1 (high-power), three Cortex A78 (mid-power), and four Cortex A55 (low-power/high-efficiency) cores.

Where Google Tensor stands out is that it’s using two of the most powerful Cortex X1 cores clocked at 2.8 GHz, which should help the phone excel at demanding tasks.

On the other hand, the company also confirmed another strange rumor: rather than three mid-tier cores, the Pixel 6 uses just two. Even more puzzling, these aren’t even the newer A78, but instead the older, less powerful, and less efficient A76 cores (running at 2.25 GHz) released way back in 2018.

Meanwhile, the phone also uses four low-power A55 cores running at 1.8 GHz, while the GPU is the Mali-G78 MP20, which should offer gaming performance as good as anything else on Android.
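For a sense of what that layout looks like in practice, here’s a minimal Kotlin sketch that groups cores by their advertised maximum clock using the standard Linux cpufreq sysfs nodes. It assumes those nodes are readable (some Android builds restrict access, in which case running the equivalent logic from an adb shell is the safer route). On a 2x X1 + 2x A76 + 4x A55 design you’d expect three distinct frequency buckets, roughly 2.80, 2.25, and 1.80 GHz.

```kotlin
import java.io.File

// Rough sketch: group CPU cores by their advertised maximum frequency using
// the standard Linux cpufreq sysfs nodes. Access can be restricted on some
// Android builds; an adb shell is a safer place to poke at these files.
fun main() {
    val cpuDirs = File("/sys/devices/system/cpu")
        .listFiles { f -> f.isDirectory && f.name.matches(Regex("cpu[0-9]+")) }
        ?.sortedBy { it.name.removePrefix("cpu").toInt() }
        ?: emptyList()

    // Pair each core with its max frequency (reported in kHz), then bucket by frequency.
    val clusters = cpuDirs.mapNotNull { dir ->
        val freqFile = File(dir, "cpufreq/cpuinfo_max_freq")
        if (freqFile.canRead()) dir.name to freqFile.readText().trim().toLong() else null
    }.groupBy({ it.second }, { it.first })

    clusters.toSortedMap(compareByDescending<Long> { it }).forEach { (khz, cores) ->
        println("%.2f GHz cluster: %s".format(khz / 1_000_000.0, cores.joinToString()))
    }
}
```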

The use of the older A76 cores continues to be the biggest headscratcher, and we haven’t heard a clear reason for it yet.

For its part, Google says the processor is 80% faster than the Pixel 5; the GPU is 370% faster. But considering the Pixel 5 was spec’d with a Snapdragon 765G, that’s not saying too much, other than that the processor should be roughly competitive with existing flagship Android chips.

An Ars Technica interview with Google engineers sheds some light on the core layout decisions.

Phil Carmack, VP of Google Silicon, explains:

“We focused a lot of our design effort on how the workload is allocated, how the energy is distributed across the chip, and how the processors come into play at various points in time. When a heavy workload comes in, Android tends to hit it hard, and that’s how we get responsiveness.”

He adds “when it’s a steady-state problem where, say, the CPU has a lighter load but it’s still modestly significant, you’ll have the dual X1s running, and at that performance level, that will be the most efficient.”

So there seems to be a greater bias towards running medium-weight tasks on the X1 cores rather than the A76s when possible, which should hopefully make for a more responsive phone. By dialing down the powerful cores more often, “a workload that you normally would have done with dual A76s, maxed out, is now barely tapping the gas with dual X1s.”
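To make the perf-per-watt argument concrete, here’s a back-of-the-envelope sketch in Kotlin. The throughput and power figures are invented purely for illustration (Google hasn’t published operating-point numbers); the point is simply that energy is power multiplied by time, so a wider core that finishes the same work sooner at a relaxed operating point can use less total energy than a smaller core running flat out.

```kotlin
// Purely illustrative arithmetic: the throughput and power numbers below are
// made up, not Google's. Energy = power x time, so finishing sooner at lower
// power can beat a maxed-out smaller core.
fun main() {
    val work = 1.0e9                  // abstract "units of work" to complete

    // Hypothetical dual A76s maxed out: near peak clocks and peak power.
    val a76Throughput = 2 * 5.0e8     // units of work per second (made up)
    val a76Power = 2 * 1.8            // watts while maxed out (made up)

    // Hypothetical dual X1s "barely tapping the gas": lower clocks and power,
    // but still more throughput thanks to the wider core.
    val x1Throughput = 2 * 7.0e8      // made up
    val x1Power = 2 * 1.2             // made up

    fun energyJoules(throughput: Double, power: Double) = (work / throughput) * power

    println("Dual A76, maxed out: %.2f J".format(energyJoules(a76Throughput, a76Power)))
    println("Dual X1, relaxed:    %.2f J".format(energyJoules(x1Throughput, x1Power)))
}
```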

As noted in the Ars Technica article, having one big core is actually a recent development for ARM-based chips. Traditionally, these chips have used two or more high-performance cores. Apple, for its part, has continued to stick with a simpler divide between high-efficiency and high-performance cores.

Carmack, meanwhile, notes that “if you want responsiveness, the quickest way to get that, and the most efficient way to get high-performance, is probably two big cores,” while suggesting that having a single big core is only great for single-threaded benchmarks.

Google Tensor chip

In any case, Tensor is not an Apple-level leap in performance over competitors — at least when it comes to basic CPU and GPU tasks. Where Tensor is really meant to shine is in AI and ML tasks with its new “TPU” or tensor processing unit (sometimes called a neural processing unit in other devices). There’s plenty of unexplored potential in this regard.
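For context on how third-party apps typically reach an on-device ML accelerator, here’s a rough Kotlin sketch using TensorFlow Lite with the Android NNAPI delegate. This is not Google’s own code path for Pixel features; the model file and tensor shapes are hypothetical placeholders, and whether NNAPI actually routes the work to Tensor’s TPU, the GPU, or the CPU is decided by the device’s drivers, not the app.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

// Rough sketch: hand a TensorFlow Lite model to whatever accelerator the
// vendor's NNAPI drivers expose. Model file and shapes are hypothetical.
fun runOnDeviceInference(modelFile: File): FloatArray {
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelFile, options)

    // Hypothetical image-classifier shapes: 1x224x224x3 float input, 1x1000 output.
    val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
    val output = Array(1) { FloatArray(1000) }

    interpreter.run(input, output)

    interpreter.close()
    nnApiDelegate.close()
    return output[0]
}
```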

Unfortunately, Google was vague on the specs, other than showing off some capabilities enabled by Google Tensor, like HDR video at 4K 60 fps, the new Face Unblur feature, and the super-quick Live Translate implementation (read my hands-on for more).

The company suggests it’s reluctant to share numbers because existing ML benchmarks are “backward-looking,” but basically wants us to know that Tensor is meant to run Google’s own ML algorithms optimally. The company says some of these machine learning tasks simply can’t run efficiently on other Android devices; I’m curious what Qualcomm has to say about that.

I’d still like to see some numbers, though, just as a frame of reference. Rumors have also suggested that Google Tensor was created in partnership with Samsung, but the company made no mention of this during its announcement or press unveiling; we’ll have to wait for teardowns to see if people spot any Samsung components.

But more than anything, Google Tensor is an opportunity for Google to achieve the cohesion between hardware and software that Apple is best known for, giving it greater control over every part of the user experience. In all likelihood, the best of Google Tensor is yet to come, and I can’t wait to see how other manufacturers respond.
