Nvidia’s researchers developed an AI that converts standard videos into incredibly smooth slow motion.
The broad strokes: Capturing high-quality slow-motion footage requires specialty cameras, plenty of storage, and the foresight to set your gear to the proper mode before you start shooting.
Slow-motion video is typically shot at around 240 frames per second (fps), the number of individual images that make up one second of video. The more frames you capture per second, the smoother the footage looks when you slow it down.
The impact: Anyone who has ever wished they could convert part of a regular video into a fluid slow-motion clip can appreciate this.
If you’ve captured footage at, say, the standard smartphone rate of 30fps, simply slowing down the playback will result in something choppy and hard to watch, as the quick calculation below shows.
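To see why, here is the back-of-the-envelope arithmetic (the 1/8-speed figure is just an illustrative example):

```python
# Slowing footage down stretches the same captured frames over more
# playback time, so the effective frame rate drops proportionally.
capture_fps = 30   # typical smartphone video
slowdown = 8       # e.g., playing the clip back at 1/8 speed

effective_fps = capture_fps / slowdown
print(f"Effective playback rate: {effective_fps} fps")  # 3.75 fps: visibly choppy
```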
Nvidia’s AI estimates what additional frames would look like and synthesizes new ones to fill the gaps. It can take any two sequential frames and hallucinate an arbitrary number of new in-between frames to connect them, preserving the motion between them.
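For intuition, here is a minimal sketch of the simplest possible form of frame interpolation: linear blending between two frames. This is not Nvidia’s method; their network learns the motion between frames and warps pixels along it, precisely to avoid the ghosting that a naive blend like this produces on moving objects. The shape of the problem, though (two frames in, several frames out), is the same:

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_new):
    """Naively synthesize n_new frames between two frames by linear blending.

    A learned approach instead models the motion between the frames;
    plain blending ghosts anything that moves.
    """
    frames = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)  # fractional position between the two frames
        blended = (1 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Example: 7 in-between frames turn each 30fps frame pair into a 240fps run.
a = np.zeros((720, 1280, 3), dtype=np.uint8)
b = np.full((720, 1280, 3), 255, dtype=np.uint8)
in_between = interpolate_frames(a, b, n_new=7)
```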
According to a company blog post:
Using Nvidia Tesla V100 GPUs and cuDNN-accelerated PyTorch deep learning framework the team trained their system on over 11,000 videos of everyday and sports activities shot at 240 frames-per-second. Once trained, the convolutional neural network predicted the extra frames.
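The post doesn’t spell out the training setup, but a key detail is implied: 240fps footage provides its own ground truth, because you can hold out real middle frames and train the network to reconstruct them. Below is a highly simplified sketch of that idea in PyTorch, with a toy stand-in network and random stand-in data; the actual Super SloMo architecture is considerably more involved:

```python
import torch
import torch.nn as nn

# Toy stand-in for the interpolation network: it takes two frames stacked
# along the channel axis (3 + 3 = 6 channels) and predicts the frame halfway
# between them. The real model is far more sophisticated; this only sketches
# the shape of the training loop.
class MidFrameNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame_a, frame_b):
        return self.net(torch.cat([frame_a, frame_b], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"  # the team used Tesla V100s
model = MidFrameNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in data: each triplet mimics three consecutive frames from a 240fps
# clip, so the middle frame is free ground truth. Random tensors used here.
loader = [tuple(torch.rand(2, 3, 64, 64) for _ in range(3)) for _ in range(4)]

for frame_a, frame_mid, frame_b in loader:
    frame_a, frame_mid, frame_b = (t.to(device) for t in (frame_a, frame_mid, frame_b))
    pred = model(frame_a, frame_b)    # predict the in-between frame
    loss = loss_fn(pred, frame_mid)   # compare against the real middle frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```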
The bottom line: Nvidia’s AI division continues to push the limits of what we think is possible. Its research has conjured people out of thin air and changed the weather in videos. But it might be a while before we see anything like this embedded in our devices or available for download. The team still has plenty of obstacles to overcome, and this research sits at the cutting edge of deep learning.