Napier Lopez is a writer based in New York City. He's interested in all things tech, science, and photography related, and likes to yo-yo in his free time. Follow him on Twitter.
The metaverse. Everyone’s talking about it, major companies are working on the hardware to access it, and it increasingly seems like it could eventually be the next major communication platform on the scale of the world wide web.
But if you ask Intel, ‘eventually’ is still a long way off.
In its first real statement since everyone started chiming in on the metaverse after Facebook's rebrand, Intel says that for immersive computing to really hit its stride, we will need a 1,000x increase in computational efficiency over today's very best tools.
Specifically, Raja Koduri, a senior VP at Intel, writes:
“Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000-times increase in computational efficiency from today’s state of the art.”
He later adds that aside from hardware, we will need new software architectures and algorithms to make the metaverse a reality.
Of course, there’s no clear-cut threshold for how much computing power the metaverse will require. Some would say the metaverse already exists in a rudimentary form.
But Koduri’s statement brings up an important point: for the metaverse to provide convincing social interactions to a wide group of people, it’s likely we’ll need an enormous improvement in processing efficiency.
If we want the metaverse to be more than what amounts to VR and AR massively multiplayer games — especially if we want to access the metaverse on wearable, practical devices — we simply need a lot more power.
Koduri is envisioning a metaverse that is more than basic avatars, describing encounters in the metaverse that would include “convincing and detailed avatars with realistic clothing, hair and skin tones – all rendered in real-time and based on sensor data capturing real-world 3D objects, gestures, audio and much more; data transfer at super-high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements.”
It's hard enough to manage all that with a spec'd-out gaming PC running state-of-the-art hardware, let alone the all-in-one devices that'll presumably power the metaverse of the future. Moreover, Koduri doesn't think hardware alone will be able to reach that 1,000x number — at least not any time soon — instead suggesting that AI and software improvements will make up the gap.
Of course, realistic depictions of people and environments are only one part of the puzzle; one could argue that a far more important piece is creating the standards the metaverse will need to work. Still, it's refreshing to hear someone acknowledge that even if the metaverse is our inevitable destination, we've still got a long way to go.
Via The Verge