This article was published on August 2, 2016

MIT just changed the AR game with ‘Interactive Dynamic Video’

In a world that’s soon to be dominated by AR, the ability to create a lifelike environment rests almost wholly on advanced 3D modeling, an expensive and labor-intensive process that yields great results but consumes hundreds (or thousands) of hours of work. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) may have a better way.

Dubbed ‘Interactive Dynamic Video’ (IDV), the method uses traditional cameras and algorithms to pick out the almost invisible vibrations of an object and build a simulation users can interact with virtually. These items can be pushed, pulled and stacked, just like in real life, through a new imaging model that costs a fraction of what 3D modeling does and requires only a camera and some image editing.

“Computer graphics allows us to use 3D models to build interactive simulations, but the techniques can be complicated,” says Doug James, a professor of computer science at Stanford University who was not involved in the research. “[Abe] Davis and his colleagues have provided a simple and clever way to extract a useful dynamics model from very tiny vibrations in video, and shown how to use it to animate an image.”

While algorithms already exist to track and magnify motion in video, they struggle to accurately simulate objects in unknown environments. Most of this work takes place with controlled lighting and backgrounds (such as a green screen), which makes the environment easier to control and yields higher-quality models.

IDV works differently. Instead of shooting on a green screen, the algorithm uses information already present in existing video to create these simulations, even in an uncontrolled environment. To do this, the team analyzed video clips to find the “vibration modes” of objects at different frequencies, each representing a distinct way the object can move. By identifying the shapes of these vibration modes, the researchers could better predict how a model would move in a realistic simulation.
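To make that idea concrete, here is a minimal Python sketch of the core concept, not MIT’s actual implementation: treat each pixel’s displacement over time as a signal, take its temporal FFT, and read candidate mode shapes off the strongest shared frequency peaks. The precomputed `displacements` array, the function name and the choice of `num_modes` are all illustrative assumptions; the real system recovers sub-pixel motion with far more sophisticated phase-based analysis.

```python
# Hedged sketch: find dominant vibration frequencies and their per-pixel
# "mode shapes" from a stack of displacement images. Assumes `displacements`
# is a (num_frames, height, width) array of motion already extracted from
# video, which is itself a non-trivial step in the actual research.
import numpy as np

def extract_vibration_modes(displacements, fps, num_modes=3):
    """Return (frequencies_hz, mode_shapes) from per-pixel displacements."""
    num_frames = displacements.shape[0]
    # FFT along time for every pixel at once.
    spectrum = np.fft.rfft(displacements, axis=0)
    freqs = np.fft.rfftfreq(num_frames, d=1.0 / fps)
    # Global power spectrum: energy summed over all pixels per frequency bin.
    power = (np.abs(spectrum) ** 2).sum(axis=(1, 2))
    power[0] = 0.0  # ignore the static (DC) component
    # Keep the strongest frequencies; the spectrum slice at each one is a
    # complex image describing how every pixel moves at that frequency.
    peak_bins = np.argsort(power)[-num_modes:][::-1]
    return freqs[peak_bins], spectrum[peak_bins]
```

Each returned mode shape encodes, pixel by pixel, one distinct way the object tends to move, which is exactly the information the simulation needs.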

Image: vibration modes identified by IDV at different frequencies

“This technique lets us capture the physical behavior of objects, which gives us a way to play with them in virtual space,” says CSAIL PhD student Abe Davis, who will be publishing the work this month for his final dissertation. “By making videos interactive, we can predict how objects will respond to unknown forces and explore new ways to engage with videos.”
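In spirit, that interaction can be modeled by treating each recovered mode as a damped harmonic oscillator: a virtual poke excites the modes, and the image deforms as a superposition of their ringing mode shapes. The sketch below is a hedged illustration under those assumptions; `simulate_poke`, the damping value and the excitation rule are hypothetical stand-ins, not the paper’s actual formulation.

```python
# Hedged sketch: respond to an impulse ("poke") at one pixel by ringing each
# vibration mode as a damped sinusoid, then summing the mode shapes. Takes
# the (freqs_hz, mode_shapes) output of extract_vibration_modes above.
import numpy as np

def simulate_poke(freqs_hz, mode_shapes, poke_pixel, fps=30.0,
                  duration_s=2.0, damping=0.05):
    """Return per-frame displacement fields after an impulse at poke_pixel."""
    y, x = poke_pixel
    num_frames = int(duration_s * fps)
    t = np.arange(num_frames) / fps
    field = np.zeros((num_frames,) + mode_shapes.shape[1:])
    for f_hz, shape in zip(freqs_hz, mode_shapes):
        omega = 2.0 * np.pi * f_hz
        # How strongly the poke excites this mode: the mode's amplitude
        # at the poked pixel (conjugated, as in a modal projection).
        excitation = np.conj(shape[y, x])
        # Each mode rings like a damped sinusoid at its own frequency.
        response = np.exp(-damping * omega * t) * np.sin(omega * t)
        field += response[:, None, None] * np.real(excitation * shape)
    # Each frame of `field` could then be used to warp the source image.
    return field
```

Summing a handful of modes this way is what makes the resulting motion look physically plausible without ever building a 3D model.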

The implications are huge for the VR/AR space, though Davis notes IDV has many possible use cases beyond it, from helping filmmakers reduce the cost of making films to helping architects determine whether a building is structurally sound.

“The ability to put real-world objects into virtual models is valuable for not just the obvious entertainment applications, but also for being able to test the stress in a safe virtual environment, in a way that doesn’t harm the real-world counterpart,” says Davis.

It can also improve AR/VR games like Pokémon Go. Niantic’s smash hit uses augmented reality to drop digital characters into real-world environments. IDV can do this too, except its fictional characters can actually interact with the environment in a realistic way, such as bouncing off walls, colliding with obstacles or avoiding them entirely.

From Pokémon to architecture, IDV has hundreds of potential use cases, and one thing is certain: this is the type of technology it will take to move VR forward. IDV is a great start.
