This article was published on November 14, 2017

IBM claims ‘quantum supremacy’ over Google with 50-qubit processor

Image: IBM Research
Story by Tristan Greene, Editor, Neural by TNW

Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him

IBM researcher Edwin Pednault was doing the dishes one evening when he realized that qubits are a lot like the bristles of a scrubbing brush. What he dubbed a “seemingly inconsequential moment” became the basis of a fault-tolerance theory that makes the 50-qubit quantum computer possible.

Early last month, Google’s quantum computing research team announced it had made strides toward what it dubbed “quantum supremacy.” The big idea: a 50-qubit quantum computer would surpass the computational capabilities of our most advanced classical supercomputers on certain tasks.

Early this month, IBM successfully built and measured an operational prototype 50-qubit processor.

The jury is still out on whether 50 qubits actually represents ‘quantum supremacy,’ thanks to some new ideas – from IBM of course – on how we can use classical computers to simulate quantum processes.
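To see why ~50 qubits was long treated as the boundary for brute-force classical simulation, consider the memory cost: a full statevector simulation of n qubits must store 2^n complex amplitudes. A minimal back-of-the-envelope sketch (assuming double-precision complex numbers, 16 bytes each; the function name is illustrative, not from IBM’s work):

```python
# Rough memory cost of brute-force statevector simulation:
# n qubits require 2**n complex amplitudes.
def statevector_bytes(n_qubits: int) -> int:
    # complex128 = two 8-byte floats = 16 bytes per amplitude
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits fits on a workstation (16 GiB); 40 needs ~16 TiB;
# 50 needs ~16 PiB, beyond any single machine's memory.
```

IBM’s 56-qubit simulation worked around this wall with smarter techniques than storing the full statevector, which is precisely what made it newsworthy.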

Pednault’s insight, however, was at least in part responsible for a new fault-tolerance capability that helped scale classical simulations of quantum processors as high as 56 qubits.

IBM’s 50-qubit processor is a phenomenal feat that arrived far quicker than many experts predicted. The company also unveiled a 20-qubit quantum processor, accessible to developers and programmers via the IBM Q cloud-based platform.

Prior to Pednault’s ‘eureka’ moment, 50 qubits was considered beyond our immediate grasp due to a problem with ‘noisy’ data.

Basically, the more qubits you have in play, the more their computations become susceptible to errors. This problem is compounded by the fact that the number of states a qubit system can represent grows exponentially with each qubit added.

In a company blog post, Pednault described the scaling:

Two qubits can represent four values simultaneously: 00, 01, 10, and 11, again in weighted combinations. Similarly, three qubits can represent 2^3, or eight values simultaneously: 000, 001, 010, 011, 100, 101, 110, 111. Fifty qubits can represent over one quadrillion values simultaneously, and 100 qubits over one quadrillion squared.
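The doubling Pednault describes is easy to check directly: each added qubit doubles the number of basis states a register can hold in weighted combination. A small sketch (the helper function is illustrative):

```python
import itertools

# Enumerate the basis-state labels an n-qubit register can hold
# in weighted combination: all 2**n bitstrings of length n.
def basis_labels(n_qubits: int) -> list[str]:
    return ["".join(bits) for bits in itertools.product("01", repeat=n_qubits)]

print(basis_labels(2))  # ['00', '01', '10', '11']
print(len(basis_labels(3)))  # 8
print(2 ** 50)  # 1125899906842624 -> over one quadrillion, as Pednault says
```

And since 2^100 = (2^50)^2, the “one quadrillion squared” figure for 100 qubits follows immediately.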

Quantum computing is meaningless if a bunch of noisy qubits throw every calculation out of whack. To date, adding more qubits has been accompanied by an increase in errors.

The previous state of quantum computing could have been described as a ‘mo’ qubits, mo’ problems’ situation. IBM’s new fault-tolerance method, inspired by Pednault’s scrubbing brush, addresses that problem by keeping noise from drowning out the computation.

Realistically, quantum computers might not prove truly useful until we reach processors with thousands of qubits, and the really exciting science-fiction stuff probably won’t come until we’ve developed quantum computers with million-qubit processors – assuming we overcome the fragility of the hardware.

But we won’t get there until someone makes a 100-qubit processor, and then a 200-qubit one, and so forth.

Once we’ve surpassed the capabilities of classical computers we’ll be able to simulate and understand molecular compounds in a much more detailed way. With this new technology comes the potential to eradicate diseases, eliminate hunger, and repair our environment.

In the race for “quantum supremacy,” there aren’t any losers. But IBM might just be winning.
