Last week, my colleague Tristan wrote about an AI developer who used machine learning to upscale the famous 1895 train scene to 4K at 60 frames per second. While this was a great short watch, it made me wonder about using AI to restore and enhance old videos.
Thankfully, I stumbled upon a paper featured by the Two Minute Papers YouTube channel over the weekend that aims to improve and colorize these videos. The model uses a temporal neural network to identify and correct defects such as flicker in vintage videos.
[Read: Watch: AI developer upscales famous 1895 train scene to 4K at 60 FPS]
Satoshi Iizuka and Edgar Simo-Serra, co-authors of the paper, explain that the model aims to perform multiple tasks, such as noise removal and colorization, to enhance the quality of these videos:
The remastering of vintage films comprises of a diversity of sub-tasks including super-resolution, noise removal, and contrast enhancement which jointly aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white, or has low quality colors, for which colorization becomes necessary.
To test their neural network's mettle, the researchers also compared its performance against older models aimed at colorizing and restoring vintage videos. Check out the video below to see that in action; the top-left video is the input, and the bottom-right video is the output from this new neural network.
While the neural network takes care of blemishes in the video, developers need to provide a reference image for colorization. Thankfully, there are plenty of AI models around for colorizing old photos.
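To get an intuition for how a reference image can drive colorization, here is a deliberately simple sketch: it matches each grayscale pixel to reference pixels of similar brightness and borrows their average color. This is a toy heuristic for illustration only, not the neural approach the paper uses; the function name and binning strategy are my own assumptions.

```python
import numpy as np

def colorize_from_reference(gray, ref_rgb, bins=16):
    """Toy reference-based colorization (illustrative only, not the paper's method).

    gray:    (H, W) array of luminance values in [0, 1]
    ref_rgb: (H2, W2, 3) reference color image with values in [0, 1]
    Returns a (H, W, 3) colorized array.
    """
    # Luminance of the reference image (simple channel average)
    ref_lum = ref_rgb.mean(axis=-1)
    edges = np.linspace(0.0, 1.0, bins + 1)

    # Average color of each luminance bin in the reference image
    bin_ids = np.clip(np.digitize(ref_lum, edges) - 1, 0, bins - 1)
    palette = np.zeros((bins, 3))
    for b in range(bins):
        mask = bin_ids == b
        # Fall back to a neutral gray when the reference has no pixels in this bin
        palette[b] = ref_rgb[mask].mean(axis=0) if mask.any() else edges[b]

    # Map each grayscale pixel to the average color of its brightness bin
    gray_bins = np.clip(np.digitize(gray, edges) - 1, 0, bins - 1)
    return palette[gray_bins]
```

A real system learns this mapping with spatial and temporal context instead of a per-pixel brightness lookup, which is why the toy version produces flat, banded colors while the paper's network produces coherent ones.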
We haven't yet seen major movie studios use this type of model commercially to enhance old films. But as neural networks improve, we can expect Hollywood and other film industries around the world to enlist AI's help in the near future.
You can read the full paper here and check out Two Minute Papers' explanation of the model here.