Our universe is a ridiculous place. It’s where all the silliest things we’re aware of happen. And chief among the silliness is the wacky idea of time.
Don’t get me wrong, the metaverse is a strong second. Especially Meta’s (formerly Facebook’s) agonizingly dysfunctional approach to building it.
But time’s even stranger than changing the name of the world’s most widely known technology company to something that literally means “self-referential.”
Time is the opposite of self-referential. If it exists in a tangible, physical form, then we might be living in a simulated universe — our own bespoke layer in the metaverse. This might sound weird, but it’s actually pretty intuitive.
In this scenario, for whatever reason, someone or something created a simulated reality and put us in it. This reality is made of discrete chunks of spacetime. From our point of view, this spacetime is the bedrock of our universe. From the creator’s, it’s the bits that make up our data.
This all raises the question: what if time doesn’t exist? What if time is just a measurement and we’re living in base reality? If that were true, we’d have to figure out what reality is actually made of.
And that’s where physics concepts such as string theory, parallel universes, and dark matter come in. They’re all theoretical frameworks that spare us the need to describe the universe in the kinds of terms we can intuit and recreate.
It’s a much more interesting article, however, if we take a leap and assume that time does exist.
What is time?
We’ve covered the concept of spacetime as discrete chunks extensively here at Neural.
Here are some recent articles touching on the subject:
- Did the world actually end in 2012?
- Classical physics makes time travel impossible. But what about ‘timeless’ travel?
- How the laws of physics could prevent AI from gaining sentience
- Researchers figured out how the human brain makes memories
However, suffice it to say that there’s no empirical definition of time that would satisfy our desire to determine its place in our universe.
We’ll have to view the concept of time from a more measurable frame of reference.
Let’s imagine a one-second video of a dandelion swaying in the breeze.
Even though one second is a very short duration, it’s still plenty of time for our eyes and brains to pick up on any motion and figure out exactly what’s going on.
Go ahead, try it: close your eyes and try to picture a swaying dandelion as you count a full “one-one thousand” in your head. See? It’s doable.
If your imagination were a typical HD TV, it would be displaying that video at a refresh rate of 60Hz. And if the video were recorded under the most common settings, it would play back at either 24 or 30 frames per second (FPS).
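To make those numbers concrete, here’s a minimal sketch (my own illustration, not anything pulled from the video standards themselves) of how 24 or 30 FPS footage lands on a 60Hz screen: each recorded frame simply gets held for one or more refresh cycles.

```python
# Minimal sketch: how source frames map onto a 60Hz display.
# Each source frame is held on screen for one or more 1/60-second refresh cycles.
REFRESH_HZ = 60

def refreshes_per_frame(fps: int, seconds: int = 1) -> list[int]:
    """How many refresh cycles each source frame occupies over `seconds`."""
    total_refreshes = REFRESH_HZ * seconds
    total_frames = fps * seconds
    schedule, shown = [], 0
    for frame in range(1, total_frames + 1):
        # Cumulative rounding spreads the refreshes as evenly as possible.
        target = round(frame * total_refreshes / total_frames)
        schedule.append(target - shown)
        shown = target
    return schedule

print(refreshes_per_frame(30))  # [2, 2, 2, ...] -- every frame held for two refreshes
print(refreshes_per_frame(24))  # a mix of 2- and 3-refresh holds, the idea behind 3:2 pulldown
```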
Let’s add two more facts to the mix before we bring it all together and explain what these numbers mean.
- Scientists believe the human eye can perceive approximately 30 to 60 FPS.
- Experiments show that some people may even be able to perceive motion at up to 75 FPS.
If we assume the universe is made up of discrete chunks of spacetime, we can theorize a maximum frame-rate.
Unfortunately, we don’t currently have any way of estimating how many FPS the universe or base reality runs at. We can talk in terms of measurements, such as the speed of light or the size of a Planck unit, but we can’t be sure either of those perceived extremes represents a true limit in the universe.
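We can at least put a number on one of those candidate limits. Here’s a back-of-the-envelope sketch, under the entirely unproven assumption that the Planck time is the shortest possible “frame”:

```python
import math

# Back-of-the-envelope sketch (an assumption, not established physics):
# if the Planck time were the shortest possible "frame," the universe's
# maximum frame rate would be its reciprocal.
PLANCK_TIME_S = 5.39e-44   # Planck time, in seconds
HUMAN_FPS = 60             # rough upper end of ordinary human perception

max_fps = 1 / PLANCK_TIME_S
print(f"Hypothetical cosmic frame rate: {max_fps:.2e} FPS")                     # ~1.86e+43 FPS
print(f"Orders of magnitude beyond us: {math.log10(max_fps / HUMAN_FPS):.0f}")  # ~41
```

If that assumption held, the gap between our perception and the universe’s “native” frame rate would sit somewhere around 41 orders of magnitude. But that’s one guess among many.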
No matter what, we’re stuck dealing with assumptions because of our limited perspective.
What’s this got to do with the metaverse?
We’re fish in an aquarium trying to understand our relative position to the outside world. From our point of view, the universe follows at least two different sets of rules — Newtonian physics and quantum physics. But what if we’re only seeing a tiny fraction of the whole picture?
Spyridon Michalakis, the physicist who consulted on Marvel’s Ant-Man films, recently discussed the concept with Vox’s Alex Abad-Santos:
Let’s say we only perceive 100 frames per second, something like that. We can be aware of our lives and choices we make, but then the frame rate of the universe where you could be flickering between different timelines is 40 orders of magnitude above that. It’s one with 40 zeros.
Then we make the best approximation.
We’re all trying to figure out the plot of the universe by just watching the beginning and the end of the movie, the first and last frame. We’re just reconstructing the in-between the best we can. That’s where the multiverse hides; it hides there in between frames. Honestly, I think that the frame rate of the universe truly is infinite, not even finite, very, very large. And we’re so far away from that.
It’s the last line that piqued my interest: “And we’re so far away from that.” How far away is “so far”?
Because I remember when video games were little more than a few blocky pixels shuffling around a screen. Now they look nearly photo-realistic. Have you seen some of the early Unreal Engine 5 demos? They’re breathtaking.
In another 30 years, it could be impossible to differentiate between VR and reality without some form of buffer to indicate which one you’re perceiving.
Right now, millions of gamers pay premium prices for displays and graphics cards capable of running games at frame rates in excess of 120 FPS and refresh rates in excess of 120Hz, despite the fact that there’s no indication the human eye or brain can perceive motion at these rates.
Why? Because we can. Someone probably demonstrated some sort of secondary benefit to increasing frame-rates that made it easy enough to market these gonzo systems to overeager gamers.
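For what it’s worth, one secondary benefit is easy to quantify (my own illustration, not a claim from anyone’s marketing department): every bump in frame rate shrinks the gap between consecutive frames, and with it the worst-case wait between your input and what shows up on screen.

```python
# Rough illustration: how long you wait for the next frame at a given frame rate.
for fps in (30, 60, 120, 240):
    frame_time_ms = 1000 / fps
    print(f"{fps:>3} FPS -> a new frame every {frame_time_ms:4.1f} ms")

# Output:
#  30 FPS -> a new frame every 33.3 ms
#  60 FPS -> a new frame every 16.7 ms
# 120 FPS -> a new frame every  8.3 ms
# 240 FPS -> a new frame every  4.2 ms
```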
At some point, if we keep pushing the limits of FPS and refresh rates, we’ll be developing systems capable of displaying graphics at resolutions and frame-rates no human could ever perceive — which seems a lot like recording an entire music album in tones and frequencies we can’t hear.
But these systems could be useful in teaching AI to detect nuances at the quantum level (or in the “quantum realm,” as Ant-Man would say) that humans couldn’t detect, even if they shrank themselves down.
Okay, so?
Here’s the payoff: one day, maybe 30 years from now… maybe 300… it’s possible our endeavor to build the most robust metaverse possible — an immersive experience that goes far beyond merely fooling the human visual cortex — will provide us with the ground truth about base reality.
If time is indeed discrete chunks, the architects of the metaverse could eventually train an AI to dial in the universe’s frame-rate and literally see the individual chunks.
And by then rebuilding the metaverse out of digital chunks that emulate the universe’s spacetime chunks in size, speed, and mass, we would be creating a one-for-one model of our universe, inside our universe.
This would almost certainly indicate that our universe is either part of a physical multiverse or a simulation itself. And the multiverse we created? It would be a simulation inside of a simulation. You can see where this is going.
Then again, maybe time isn’t discrete. If that’s the case, then all this talk of FPS and resolution is moot. If there are no chunks, there can’t be gaps between them. And that means there can’t be any frames.