
Physicists working with Microsoft think the universe is a self-learning computer

This pre-print paper broke my brain


A team of theoretical physicists working with Microsoft today published an amazing pre-print research paper describing the universe as a self-learning system of evolutionary laws.

In other words: We live inside a computer that learns.

The big idea: Bostrom’s Simulation Argument has been a hot topic in science circles lately. We published “What if you’re living in a simulation, but there’s no computer” recently to posit a different theory, but Microsoft’s pulled a cosmic “hold my beer” with this paper.

Dubbed “The Autodidactic Universe” and published to arXiv today, the paper spans 80 pages and lays out a pretty good surface argument for a novel, nuanced theory of everything.

Here’s my take: Based on my interpretation of this paper, the universe was either going to exist or it wasn’t going to exist. The fact it exists tells us how that worked out. Whatever contrivance (law) caused that to happen set the stage for whatever was going to happen next.

The paper argues that the laws governing the universe are an evolutionary learning system. In other words: the universe is a computer and, rather than exist in a solid state, it perpetuates through a series of laws that change over time.

How’s it work? That’s the tough part. The researchers explain the universe as a learning system by invoking machine learning. Just as we can teach machines to perform functions that unfold over time (that is, to learn), the laws of the universe are essentially algorithms that do work in the form of learning operations.
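To make that analogy concrete, here’s a toy sketch of my own (nothing from the paper itself): a system whose state evolves under a simple rule, while the rule’s own parameter is nudged by feedback. The dynamics, the feedback function, and the update rates are all invented for illustration.

def evolve(state, theta):
    # One application of the current "law" to the state (hypothetical dynamics).
    return state + theta * (1.0 - state)

def feedback(state, target=1.0):
    # How far the system sits from some preferred configuration.
    return target - state

state, theta = 0.0, 0.05
for _ in range(50):
    state = evolve(state, theta)     # the law acts on the world...
    theta += 0.01 * feedback(state)  # ...and the law itself is updated

print(round(state, 3), round(theta, 3))

The point isn’t the arithmetic. It’s that nothing in the loop is fixed: the rule and the state co-evolve.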

Per the researchers:

For instance, when we see structures that resemble deep learning architectures emerge in simple autodidactic systems might we imagine that the operative matrix architecture in which our universe evolves laws, itself evolved from an autodidactic system that arose from the most minimal possible starting conditions?

It’s poetic, if you think about it. We understand the laws of physics as we observe them, so it makes sense that the original physical law would be incredibly simple, self-perpetuating, and capable of learning and evolving.

Perhaps the universe didn’t begin with a Big Bang, but a simple interaction between particles. The researchers allude to this humble origin by stating “information architectures typically amplify the causal powers of rather small collections of particles.”

What’s it mean? If you ask me, the game is rigged. The scientists describe the ever-evolving laws of the universe as being irreversible:

One implication is that if the evolution of laws is real, it is likely to be unidirectional, for otherwise it would be common for laws to revert to previous states, perhaps even more likely than for them to find a new state. This is because a new state is not random but rather must meet certain constraints, while the immediate past state has already met constraints.

A reversible but evolving system would randomly explore its immediate past frequently. When we see an evolving system that displays periods of stability, it probably evolves unidirectionally.
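One way to see why accumulated constraints make back-tracking unlikely: in the toy simulation below (my own analogy, not the paper’s model), each step adds a constraint, the new state has to satisfy all of them, and by the end almost none of the earlier states would still pass today’s full test.

import random
random.seed(0)

constraints = []   # each is (a, b): the state must lie inside [a, b]
history = []
lo, hi = 0.0, 1.0

for _ in range(30):
    a = random.uniform(lo, (lo + hi) / 2)    # a new constraint arrives...
    b = random.uniform((lo + hi) / 2, hi)
    constraints.append((a, b))
    lo, hi = max(lo, a), min(hi, b)          # ...so the feasible window only shrinks
    history.append(random.uniform(lo, hi))   # the new state meets every constraint so far

still_valid = [s for s in history if all(a <= s <= b for a, b in constraints)]
print(len(still_valid), "of", len(history), "past states satisfy the current constraints")

Typically only the last few states remain valid, which is the sense in which such a system doesn’t casually revert to where it’s been.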

In illustrating these points, the researchers invoke the image of a forensics expert attempting to recreate how a given program came to a result. In one example, the expert could simply check the magnetic marks left on the hard disk. In that sense, the program’s run is reversible: a history of its execution exists.

But if the same expert tried to determine the results of a program by examining the CPU, arguably the entity most responsible for its execution, it’d be much more difficult to do. There’s no intentional, internal record of the operations a CPU runs.

To paint the historical picture of a computer program through internal observation of its CPU at work, you’d have to examine how every particle that interacted with its logic gates changed during operation.
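In code, the difference between the two kinds of forensics might look something like this, a loose analogy of my own rather than anything from the paper: the same computation run once with a full record of its intermediate steps (the hard disk) and once keeping only the final value (the CPU).

def run_with_log(x, steps=5):
    # "Hard disk" version: every intermediate value is written down.
    log = [x]
    for _ in range(steps):
        x = (3 * x + 1) % 17   # an arbitrary update rule
        log.append(x)
    return x, log              # the history survives; the run can be replayed

def run_without_log(x, steps=5):
    # "CPU" version: only the final value remains.
    for _ in range(steps):
        x = (3 * x + 1) % 17
    return x                   # no internal record of how we got here

result, trace = run_with_log(4)
print(trace)                   # the whole history, step by step
print(run_without_log(4))      # just the end state

Same computation, same answer, but only one of them leaves a trail a forensics expert could follow.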

The consequences: If the universe operates via a set of laws that, while initially simple, are autodidactic (self-learning) and thus capable of evolving over time, it could be impossible for humans to ever unify physics.

According to this paper, the rules that governed concepts such as relativity may have had functionally different operational consequences 13.8 billion years ago than they will 100 trillion years from now. And that means “physics” is a moving target.

Of course, this is all speculation based on theoretical physics. Surely the researchers don’t literally mean the universe is a computer, right?

Per the paper:

We are examining whether the Universe is a learning computer.

Part of the theory seems to indicate the universe is a learning computer, in that the laws it’s currently constrained by were not set in stone at its inception.

We can’t reverse the universe, as a process, because there exists no internally verifiable record of its processes — unless there’s a cosmic hard disk floating around out there in space somewhere.

In conclusion: Our scientists are stuck chasing last year’s physics models as the self-bootstrapped, autodidactic universe self-perpetuates its evolving laws throughout eternity.

This is a pre-print paper, so don’t consider it canonical just yet, but it passes muster on initial inspection. This all comes off a little “I just got back from the dispensary and had a few thoughts” at first, but the researchers do a lot of legwork describing the kinds of algorithms and neural network systems such a universe would generate and, itself, be composed of.

Ultimately the team describes this work as “baby steps” toward a broader theory.
