
This article was published on November 14, 2017

IBM claims ‘quantum supremacy’ over Google with 50-qubit processor


Image: IBM Research

IBM researcher Edwin Pednault was doing the dishes one evening when he came to the realization that qubits are a lot like the bristles of a scrubbing brush. What he called a “seemingly inconsequential moment” became the basis of a fault-tolerance theory that makes the 50-qubit quantum computer possible.

Early last month, Google’s quantum computer research team announced it had made strides towards what it dubbed “quantum supremacy.” The big idea: a 50-qubit quantum computer would surpass the computational capabilities of even our most advanced classical supercomputers.

Early this month, IBM successfully built and measured an operational prototype 50-qubit processor.

The jury is still out on whether 50 qubits actually represents ‘quantum supremacy,’ thanks to some new ideas – from IBM of course – on how we can use classical computers to simulate quantum processes.
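To see what simulating a quantum process on a classical machine looks like at its simplest, here’s a minimal sketch in Python with NumPy – my own illustration, not IBM’s technique – that tracks a single qubit’s state vector and applies a Hadamard gate to put it into superposition:

```python
import numpy as np

# A qubit's state is a vector of two complex amplitudes; a gate is a
# matrix applied to that vector. This is the naive statevector approach,
# not IBM's method -- it's here purely to illustrate the idea.

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)     # Hadamard gate

state = np.array([1, 0], dtype=complex)  # qubit starts in |0>
state = H @ state                        # apply the gate

print(state)               # [0.707..., 0.707...] -> equal superposition
print(np.abs(state) ** 2)  # measurement probabilities: [0.5, 0.5]
```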


Pednault’s insight, however, was at least in part responsible for a new fault-tolerance capability that helped scale classical simulations of quantum processors as high as 56 qubits.

IBM’s 50-qubit processor is a phenomenal feat that arrived far quicker than any expert predicted. The company also unveiled a 20-qubit quantum processor that developers and programmers can access via the IBM Q cloud-based platform.

Prior to Pednault’s ‘eureka’ moment, 50 qubits was considered beyond our immediate grasp due to a problem with ‘noisy’ data.

Basically, the more qubits you have in play, the more susceptible their computations become to errors. This problem is compounded by the fact that the number of values a set of qubits can represent scales exponentially with each qubit added.

In a company blog post, Pednault described that scaling:

Two qubits can represent four values simultaneously: 00, 01, 10, and 11, again in weighted combinations. Similarly, three qubits can represent 2^3, or eight values simultaneously: 000, 001, 010, 011, 100, 101, 110, 111. Fifty qubits can represent over one quadrillion values simultaneously, and 100 qubits over one quadrillion squared.
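Pednault’s arithmetic is also exactly why classical simulation runs out of road so fast: a full statevector needs 2^n complex amplitudes for n qubits. A quick back-of-the-envelope script (my own illustration, assuming 16 bytes per double-precision complex amplitude):

```python
# How much memory a brute-force statevector needs as qubits are added.
# 16 bytes per amplitude (complex128) is an assumption for illustration.

BYTES_PER_AMPLITUDE = 16

for n in (2, 3, 10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n:>2} qubits: {amplitudes:,} amplitudes, "
          f"{amplitudes * BYTES_PER_AMPLITUDE:,} bytes")

# 50 qubits works out to roughly 18 quadrillion bytes (~16 pebibytes) --
# far more RAM than any supercomputer has, which is why clever tricks
# like IBM's 56-qubit simulation were needed at all.
```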

Quantum computing is meaningless if a bunch of noisy qubits throw every calculation out of whack. Adding more processing power, to date, has been accompanied by an increase in errors.
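A toy model makes the compounding effect easy to see. Assume – and this is a simplification of mine, not IBM’s error analysis – that every gate operation fails independently with some small probability; the odds of a fully error-free run then collapse as circuits get bigger:

```python
# If each gate errs independently with probability p, the chance that an
# entire circuit runs cleanly is (1 - p) ** gates. The 0.1% per-gate
# error rate below is an assumed, illustrative figure.

p = 0.001

for gates in (10, 100, 1_000, 10_000):
    success = (1 - p) ** gates
    print(f"{gates:>6} gates: {success:.1%} chance of an error-free run")
```

More qubits mean more gates to do anything useful, so without fault tolerance the math quickly turns against you.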

The previous state of quantum computing could have been described as a ‘mo’ qubits, mo’ problems’ situation. IBM’s fancy new method for fault-tolerance, inspired by Pednault’s scrubbing brush, tackles that problem by keeping the noise in check.

Realistically, quantum computers might not prove truly useful until we reach processors with thousand-qubit capabilities, and the really exciting science-fiction stuff probably won’t come until we’ve developed quantum computers with million-qubit processors – assuming we overcome the fragility of the hardware.

But we won’t get there until someone makes a 100-qubit processor, and then a 200-qubit one, and so forth.

Once we’ve surpassed the capabilities of classical computers we’ll be able to simulate and understand molecular compounds in a much more detailed way. With this new technology comes the potential to eradicate diseases, eliminate hunger, and repair our environment.

In the race for “quantum supremacy,” there aren’t any losers. But IBM might just be winning.

