
This article was published on December 25, 2021

Research indicates the whole universe could be a giant neural network

All the universe is a neural network, and all the humans merely nodes



The core idea is deceptively simple: every observable phenomenon in the entire universe can be modeled by a neural network. And that means, by extension, the universe itself may be a neural network.

Vitaly Vanchurin, a professor of physics at the University of Minnesota Duluth, published an incredible paper last August entitled “The World as a Neural Network” on the arXiv pre-print server. It managed to slide past our notice until today when Futurism’s Victor Tangermann published an interview with Vanchurin discussing the paper.

The big idea

According to the paper:

We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g. bias vector or weight matrix) and “hidden” variables (e.g. state vector of neurons).
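To make those two categories concrete, here’s a minimal toy network in Python. This is our own illustrative sketch, not Vanchurin’s formalism: the weight matrix and bias vector play the role of the “trainable” variables, and the neuron state vector plays the role of the “hidden” variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8

# "Trainable" variables: the weight matrix and bias vector, the slow
# degrees of freedom that change only when the network learns.
weights = rng.normal(size=(n_neurons, n_neurons)) / np.sqrt(n_neurons)
bias = rng.normal(size=n_neurons)

# "Hidden" variables: the state vector of the neurons, the fast
# degrees of freedom that change on every update.
state = rng.normal(size=n_neurons)

def step(state, weights, bias):
    # One tick of the dynamics: push the hidden state through the
    # trainable variables and a nonlinearity.
    return np.tanh(weights @ state + bias)

# Let the hidden variables evolve while the trainable ones stay fixed.
for _ in range(5):
    state = step(state, weights, bias)

print(state)
```

The point of the split is that the two kinds of variables evolve on very different timescales: in an ordinary network, the state vector updates on every pass, while the weights and biases drift slowly during training.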


At its most basic, Vanchurin’s work attempts to bridge the gap between quantum and classical physics. We know that quantum physics does a great job of explaining what’s going on in the universe at very small scales. When we’re dealing with individual photons, for example, we can work with quantum mechanics at an observable, repeatable, measurable scale.

But when we pan out, we’re forced to use classical physics to describe what’s happening, because we lose the thread in the transition from observable quantum phenomena to classical observations.

The argument

The root problem with sussing out a theory of everything – in this case, one that defines the very nature of the universe itself – is that such theories usually end up replacing one proxy-for-god with another. Theorists have posited everything from a divine creator to the idea that we’re all living in a computer simulation, but the two most enduring explanations for our universe are based on distinct interpretations of quantum mechanics: the “many worlds” and “hidden variables” interpretations. These are the ones Vanchurin attempts to reconcile with his “world as a neural network” theory.

To this end, Vanchurin concludes:

In this paper we discussed a possibility that the entire universe on its most fundamental level is a neural network. This is a very bold claim. We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works. With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong. All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately (or fortunately) it is easier said than done.

Quick take: Vanchurin specifically says he’s not adding anything to the “many worlds” interpretation, but that’s where the most interesting philosophical implications lie (in this author’s humble opinion).

If Vanchurin’s work pans out in peer review, or at least leads to greater scientific fixation on the idea of the universe as a fully functioning neural network, then we’ll have found a thread to pull on that could put us on the path to a successful theory of everything.

If we’re all nodes in a neural network, what’s the network’s purpose? Is the universe one giant, closed network, or is it a single layer in a grander network? Or perhaps we’re just one of trillions of universes connected to the same network. When we train our own neural networks, we run thousands or millions of cycles until the AI is properly “trained.” Are we just one of innumerable training cycles for some larger-than-universal machine’s greater purpose?
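For readers who’ve never run one, a training “cycle” is nothing exotic: make a prediction, measure the error, nudge the trainable variables, repeat. Here’s a deliberately tiny sketch in Python with made-up data (the target rule y = 3x + 0.5 is an arbitrary illustration, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)   # toy inputs
y = 3.0 * x + 0.5          # the "law" we want the network to learn

w, b = 0.0, 0.0            # trainable variables (a single weight and bias)
lr = 0.01                  # learning rate

for cycle in range(10_000):            # thousands of cycles, as above
    pred = w * x + b                   # forward pass
    error = pred - y
    # Gradient descent on mean squared error: nudge w and b downhill.
    w -= lr * (2 * error * x).mean()
    b -= lr * (2 * error).mean()

print(w, b)  # approaches w ≈ 3.0, b ≈ 0.5
```

Each pass through that loop is one “cycle”; the philosophical question above is whether our universe is something like one iteration of such a loop running at an unimaginably larger scale.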

You can read the whole paper here on arXiv.
