Nvidia’s new AI converts real-life videos into 3D renders

We’ve often wondered while playing games or experiencing virtual reality: how could this look closer to the real world? Nvidia might have an answer. The company has developed an AI that can turn video into a virtual landscape.

Nvidia set up a demo zone at the NeurIPS AI conference in Montreal to show off this technology. The company used its own supercomputer, the DGX-1, powered by Tensor Core GPUs, to convert videos captured from a self-driving car’s dashcam. This setup made it possible to convert theory into tangible results.

The research team first extracted a high-level semantic map from the video using a neural network, then used Unreal Engine 4 to generate colorized frames from that map. In the last step, Nvidia’s AI converts these representations into realistic images. Developers can easily edit the end result to suit their needs.
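The three stages above can be sketched in code. This is a toy illustration of the data flow only, not Nvidia’s actual pipeline: the function names, the class palette, and the simple brightness-based “segmentation” are all hypothetical stand-ins for the real neural networks and engine involved.

```python
import numpy as np

NUM_CLASSES = 4  # illustrative labels, e.g. road, car, building, sky

def extract_semantic_map(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the segmentation network: assign each pixel a class id."""
    # Toy rule: bucket pixel brightness into NUM_CLASSES bands.
    brightness = frame.mean(axis=-1)
    return (brightness // (256 // NUM_CLASSES)).astype(np.int64).clip(0, NUM_CLASSES - 1)

def render_layout(semantic_map: np.ndarray) -> np.ndarray:
    """Stand-in for the engine step: colorize each class with a flat color."""
    palette = np.array([[90, 90, 90], [200, 40, 40],
                        [160, 120, 80], [120, 180, 240]], dtype=np.uint8)
    return palette[semantic_map]  # look up a color per pixel

def generate_image(layout: np.ndarray) -> np.ndarray:
    """Stand-in for the generative model: map the flat layout to a textured frame."""
    noise = np.random.default_rng(0).integers(-10, 10, layout.shape)
    return np.clip(layout.astype(np.int64) + noise, 0, 255).astype(np.uint8)

# Data flow: dashcam frame -> semantic map -> engine layout -> generated image
frame = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
semantic = extract_semantic_map(frame)
layout = render_layout(semantic)
output = generate_image(layout)
```

The point of the sketch is the separation of concerns: because the semantic map sits between the video and the generator, developers can edit that intermediate representation and re-render, which is where the easy editability mentioned above comes from.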

“Nvidia has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network,” Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, said in a statement. “Neural networks – specifically generative models – are going to change the way graphics are created.”

He added that this technology will help developers and artists create virtual content at a much lower cost than before.

This is particularly exciting for game developers and virtual reality content creators, as they can explore new possibilities by drawing from standard video. However, this technology is still in the development phase and requires a supercomputer, so we might have to wait a while before we see it on our consoles and desktops.

You can read more about Nvidia research here.
