Human-centric AI news and analysis

This article was published on January 13, 2021


Scientists figured out how to stop time using quantum algorithms



Story by Tristan Greene, Editor, Neural by TNW

Tristan covers human-centric artificial intelligence advances, politics, queer stuff, cannabis, and gaming. Pronouns: He/him

Everyone’s always talking about traveling through time, but if you ask me, the ultimate temporal vacation would be simply pausing the clock for a bit. Who among us couldn’t use a five- or six-month break after 2020 before committing to an entire new calendar year? It’s not you, 2021; it’s us.

Unfortunately, this isn’t an episode of Rick and Morty so we can’t stop time until we’re ready to move on.

But maybe our computers can.

A pair of studies about quantum algorithms, from independent research teams, recently appeared on the arXiv preprint server. They’re both essentially about the same thing: using clever algorithms to solve nonlinear differential equations.

And if you squint at them through the lens of speculative science you may conclude, as I have, that they’re a recipe for computers that can basically stop time in order to solve a problem requiring a near-immediate solution.

Linear equations are the bread and butter of classical computing. We crunch numbers and use basic binary computation to determine what happens next in a linear pattern or sequence using classical algorithms. But nonlinear differential equations are tougher: they’re often too hard, or entirely impractical, for even the most powerful classical computers to solve.
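To make that concrete, here’s a minimal sketch (my own illustration, not from either paper) contrasting a linear equation with a quadratic nonlinear one, both stepped forward with simple Euler integration. The names and constants are made up for the example; the point is that the nonlinear term feeds the state back into its own rate of change:

```python
import math

def euler(f, x0, dt, steps):
    """Advance dx/dt = f(x) from x0 using fixed-step Euler integration."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

a, b = -1.0, 0.5

# Linear: dx/dt = a*x. Predictable, with the closed form x(t) = exp(a*t).
linear = euler(lambda x: a * x, x0=1.0, dt=0.001, steps=1000)

# Nonlinear: dx/dt = a*x + b*x**2. The x**2 term couples the state to
# itself -- the feedback that makes large systems of such equations hard.
nonlinear = euler(lambda x: a * x + b * x * x, x0=1.0, dt=0.001, steps=1000)

print(linear)     # close to exp(-1) = 0.3679
print(nonlinear)  # decays more slowly because of the quadratic feedback
```

A single scalar equation like this is easy; the trouble the papers target comes when you have huge systems of coupled variables, each feeding back into the others.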


The hope is that one day quantum computers will break the difficulty barrier and make these hard-to-solve problems seem like ordinary compute tasks.

When computers solve these kinds of problems, they’re basically predicting the future. Today’s AI running on classical computers can look at a picture of a ball in mid-air and, given enough data, predict where the ball is going. You can add a few more balls to the equation and the computer will still get it right most of the time.

But once the scale of interactivity creates a feedback loop, such as in particle interactions or when you toss a heaping handful of glitter into the air, a classical computer essentially doesn’t have the oomph to deal with physics at that scale.

This, as quantum researcher Andrew Childs told Quanta Magazine, is why we can’t predict the weather. There are just too many particle interactions for a regular old computer to follow.
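That sensitivity is easy to demonstrate. The sketch below (my own toy example, not from the article’s sources) iterates the logistic map, a textbook feedback loop: each output is fed straight back in as the next input, so two starting points differing by one part in a billion end up nowhere near each other:

```python
def max_divergence(x0, eps=1e-9, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1 - x) from two nearby starting
    points and track how far apart the trajectories ever get."""
    x, y = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# A gap of 0.000000001 grows to order 1 within ~50 iterations.
print(max_divergence(0.2))
```

Weather models face the same problem in millions of dimensions at once, which is why forecasts degrade so quickly.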

But quantum computers don’t obey the binary rules of classical computing. Not only can they zig and zag, they can also zig while they zag or do neither at the same time. For our purposes, this means they can potentially solve difficult problems such as “where is every single speck of glitter going to be in .02 seconds?” or “what’s the optimum route for this traveling salesman to take?”

In order to understand how we get from here to there (and what it means), we have to take a look at the aforementioned papers. The first one comes from the University of Maryland. You can check it out here, but the part we’re focusing on now is this:

In this paper we have presented a quantum Carleman linearization (QCL) algorithm for a class of quadratic nonlinear differential equations. Compared to the previous approach, our algorithm improves the complexity from an exponential dependence on T to a nearly quadratic dependence, under the condition R < 1.
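To give a flavor of what that quote describes: Carleman linearization rewrites a quadratic nonlinear equation as an infinite chain of linear ones, which is then truncated so a linear solver can handle it. Below is a hand-rolled classical sketch of that trick for a single scalar equation; the constants are arbitrary, and this is an illustration of the underlying idea, not the paper’s quantum algorithm:

```python
import numpy as np

# Target equation: dx/dt = a*x + b*x**2. In the variables y_k = x**k, the
# chain rule gives dy_k/dt = k*a*y_k + k*b*y_(k+1): an infinite but *linear*
# chain. Truncating at order N yields a finite linear system dy/dt = A @ y.
a, b, x0, t_final, N = -1.0, 0.2, 0.5, 1.0, 8

A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * a          # coefficient of y_k
    if k < N:
        A[k - 1, k] = k * b          # coupling to y_(k+1), dropped at k = N

# Integrate the linear system with small Euler steps (a classical stand-in
# for the quantum linear-ODE solver the paper actually uses).
y = np.array([x0 ** k for k in range(1, N + 1)])
dt, steps = 1e-4, 10000
for _ in range(steps):
    y = y + dt * (A @ y)
x_carleman = y[0]                    # first component approximates x(t_final)

# Exact solution of this Bernoulli equation, for comparison.
x_exact = a * x0 / ((a + b * x0) * np.exp(-a * t_final) - b * x0)
print(x_carleman, x_exact)           # the two agree closely here
```

The truncation only behaves when the nonlinearity stays weak relative to the linear decay, which is the intuition behind the paper’s R < 1 condition.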

And let’s take a peek at the second paper. This one’s from a team at MIT:

This paper showed that quantum computers can in principle attain an exponential advantage over classical computers for solving nonlinear differential equations. The main potential advantage of the quantum nonlinear equation algorithm over classical algorithms is that it scales logarithmically in the dimension of the solution space, making it a natural candidate for applying to high dimensional problems such as the Navier-Stokes equation and other nonlinear fluids, plasmas, etc.

Both papers are fascinating (you should read them later!) but I’ll risk gross oversimplification by saying: they detail how we can build algorithms for quantum computers to solve those really hard problems.

So what does that mean? We hear about how quantum computers could accelerate drug discovery or crack giant math problems, but where does the rubber actually hit the road? What I’m saying is: classical computing gave us iPhones, jet fighters, and video games. What’s this going to do?

It’s potentially going to give quantum computers the ability to essentially stop time. Now, as you can imagine, this doesn’t mean any of us will get a remote control with a pause button we can use to take a break from an argument, like in the Adam Sandler movie “Click.”

What it means is that a powerful-enough quantum computer running the great-great-great-great-grandchildren of the algorithms being developed today may one day be able to functionally assess particle-level physics with enough speed and accuracy to make the passage of time a non-factor in its execution.

So, theoretically, if someone in the future threw a handful of glitter at you and you had a swarm of quantum-powered defense drones, they could instantly respond by perfectly positioning themselves between you and the particles coming from the glitterplosion to protect you. Or, for a less interesting use case, you could model and forecast the Earth’s weather patterns with near-perfect accuracy over extremely long periods of time. 

This ultimately means quantum computers could one day operate in a functional time-void, solving problems at nearly the exact moment they happen.

H/t: Max G Levy, Quanta Magazine
