Tristan Greene, Editor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
Quantum computers are fragile miracles of physics that are unreliable, cost-prohibitive, and more error-prone than a shortstop with no depth perception. But, if we ever want to get to Star Trek levels of technology, we probably need them. To make them useful we have to make them reliable. And that’s a pretty tall order.
The problem, in a nutshell, is noise. Quantum computers operate by exploiting strange laws of physics that allow otherwise-impossible computations to be performed. Unfortunately, performing quantum computations creates quantum decoherence, or noise, as it's commonly called.
Think of it this way: qubits are like cans of beer.
If you’re not sure what a qubit is, you may want to read “Understanding Quantum Computers: The Basics” before you go any further.
It’s pretty easy to carry a six-pack of unopened beers around. You can walk, run, hop, skip, and jump, and it’s unlikely you’ll spill any of the unopened beverages. Once you open them, however, you’ve removed the environmental barriers to the liquid inside. It becomes much more difficult to carry all six open containers without spilling anything.
Now, start increasing the number of open beers, and it’s obvious there’s a certain threshold for loose cans that a human can carry. Maybe you can walk around with 20 cans, maybe someone else can balance 40, but eventually there’s a number no person can manage. Worse, we know that even if we don’t spill a drop or take a sip, the mere act of opening the cans means the beer inside is slowly going to evaporate over time.
Quantum decoherence happens when qubits (cans) lose information (beer) to the environment (spilling/chugging/etc.) over time. The “timer” doesn’t start until we try to do something with qubits, like measure them or perform a computation. So, the second you open a can of quantum, it begins to lose its fizz as decoherence sets in.
To put it more boringly: there’s a certain threshold for noise – called fault tolerance – below which quantum computers will theoretically be reliable enough to be considered useful. Researchers simply aren’t there yet.
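The "timer" idea above can be made concrete with a toy decay model. The numbers here are illustrative assumptions, not real hardware specifications: a qubit's stored information fades roughly exponentially with time, governed by a coherence time.

```python
import math

# Toy model (not from the article): a qubit's coherence decaying
# exponentially over time. T2 and gate_time are assumed, illustrative
# values, not the specs of any real machine.
T2 = 100e-6          # assumed coherence time: 100 microseconds
gate_time = 1e-6     # assumed time per quantum operation: 1 microsecond

def coherence_after(n_gates: int) -> float:
    """Fraction of the quantum state surviving after n_gates operations."""
    return math.exp(-n_gates * gate_time / T2)

# How many operations can we perform before coherence drops below an
# (arbitrary) 99% reliability threshold?
n = 0
while coherence_after(n) >= 0.99:
    n += 1
print(n, coherence_after(n))
```

Under these made-up numbers, the budget runs out after only a couple of operations, which is the "open can losing its fizz" problem in miniature.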
Since qubits have multiple possible states, their combined computational ability scales exponentially as you add more of them. But every additional qubit also adds more noise, so these machines become lousy with errors once they’re operating with more than a handful of qubits. Unfortunately, we need a lot of qubits to do anything useful with a quantum computer. This is the quantum physics version of the “Mo’ money, mo’ problems” thought experiment created by Biggie, et al.
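The exponential scaling is easy to see on paper: describing an n-qubit register takes 2^n complex amplitudes, so the state space doubles with every qubit added. A two-line sketch:

```python
# An n-qubit register is described by 2**n complex amplitudes, so its
# state space (and the classical cost of simulating it) doubles with
# every qubit you add.
for n in (1, 2, 10, 72):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

At 72 qubits that is already more than 4.7 sextillion amplitudes, which is why each extra noisy qubit matters so much.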
Google’s 72-qubit Bristlecone processor, for example, is a marvel of modern technology that exists at the very cutting edge of computer science. It’s not very useful yet, though, because scientists have yet to overcome the noise problem. Even at “only” 72 qubits, it’s still noisier than a fire at a dynamite factory.
Simply put, experts argue that error-correction is the biggest impediment to crossing the 100-qubit barrier (in the way that Google, IBM, and Microsoft are designing these systems). And some even say that no matter what, we’ll never overcome the noise problem.
On the other hand, leading research indicates we will — thanks to artificial intelligence.
Physicists from the Max Planck Institute for the Science of Light in Germany have developed an artificial neural network capable of learning error-correction techniques through trial and error. The work is still in the early stages, but it could lead to a robust system by which quantum computers could be scaled and still remain within fault-tolerance limits.
According to the team’s white paper:
Here, we show how a network-based “agent” can discover complete quantum-error-correction strategies, protecting a collection of qubits against noise.
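To get a feel for what an error-correction strategy even looks like, here is a minimal classical sketch of the simplest textbook scheme: the three-qubit bit-flip repetition code, where one logical bit is stored as three copies and random flips are corrected by majority vote. This is a standard illustration, not the neural-network strategy the Max Planck team trained, and the flip probability is an arbitrary assumption.

```python
import random

def encode(bit: int) -> list[int]:
    """Store one logical bit redundantly as three physical bits."""
    return [bit, bit, bit]

def apply_noise(codeword: list[int], p_flip: float) -> list[int]:
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote recovers the logical bit if at most one flip occurred."""
    return int(sum(codeword) >= 2)

random.seed(0)
p = 0.05           # assumed per-bit error rate (illustrative)
trials = 100_000
errors = sum(decode(apply_noise(encode(0), p)) != 0 for _ in range(trials))
# The encoded error rate is roughly 3*p**2, far below the raw 5% rate.
print(errors / trials)
```

Real quantum error correction is far harder (you can’t copy or directly measure a qubit without disturbing it), which is exactly why learned strategies like the Max Planck team’s agent are interesting.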
The Max Planck Institute team isn’t alone. Physicists around the world continue to experiment with different methods to stall decoherence and manipulate qubits into carrying information with minimal loss. One team developed a “flux capacitor” for quantum networks that works like a tuning fork, for example.
Noise, or decoherence, is a huge problem for quantum computers. But current research trends indicate it’s far from an insurmountable one.