Nvidia teaches AI to create 240fps super slow-mo from 30fps video clips

Nvidia Tesla V100 GPU

Researchers at Nvidia have trained an AI system to convincingly fake slow motion video. Nvidia has been steadily shifting its focus towards AI in recent years, and one of the company’s latest research projects aims to make interpolated slow motion nearly indistinguishable from footage genuinely shot at high frame rates.

The convolutional deep learning system, powered by none other than Nvidia’s own Tesla V100 GPUs, can transform even 30fps video into high-quality slow motion. The team used over 300,000 individual video frames across 11,000 240fps videos to teach the network to recognise motion and predict the extra in-between frames with high accuracy.
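
As a back-of-the-envelope illustration (our arithmetic, not anything from Nvidia’s code), stretching 30fps capture to 240fps playback means inventing seven new frames between every pair of real ones:

```python
source_fps = 30
target_fps = 240

# Each neighbouring pair of captured frames needs (ratio - 1)
# synthesised in-between frames.
ratio = target_fps // source_fps   # 8x slow motion
per_pair = ratio - 1               # 7 frames to synthesise per pair

print(ratio, per_pair)  # -> 8 7
```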

The result is ‘super slomo’ footage: a pair of U-Net neural networks warps and fuses two input frames into the intermediary frames between them with a high degree of precision. Instead of the choppy, stuttering playback you get when footage is slowed beyond its native frame rate, Nvidia’s deep learning system outputs smooth motion, reconstructed immaculately between the ‘real’ frames.
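
The full pipeline in the paper is more involved, but the core warp-and-fuse idea can be sketched in a few lines of PyTorch. Treat this as a minimal illustration under stated assumptions, not Nvidia’s actual implementation: the two optical-flow fields are taken as already predicted (in Nvidia’s system, by the first U-Net), a linear motion model approximates the flows to the intermediate time, and a plain blend stands in for the per-pixel visibility maps the second U-Net produces:

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (N, C, H, W) at positions shifted by `flow` (N, 2, H, W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys)).unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow  # where each output pixel should be read from
    # Normalise pixel coordinates to the [-1, 1] range grid_sample expects.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1), align_corners=True)

def interpolate(frame0, frame1, flow_0to1, flow_1to0, t):
    """Synthesise the frame at time t in (0, 1) between frame0 and frame1."""
    # Linear motion assumption: approximate the flows from time t back to
    # each input frame.
    flow_t0 = t * flow_1to0
    flow_t1 = (1.0 - t) * flow_0to1
    warped0 = backward_warp(frame0, flow_t0)
    warped1 = backward_warp(frame1, flow_t1)
    # Weighted blend favouring the nearer frame; the real system instead
    # predicts visibility maps to handle pixels occluded in one input.
    return (1.0 - t) * warped0 + t * warped1
```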

The team’s approach generates as many intermediate frames as are needed for smooth playback – potentially nixing the need for data-hungry high frame rate capture at the time of filming.
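
Continuing the sketch above, an 8x slowdown – or any other factor – is then just a matter of sampling the time parameter at as many points as you need between each pair of captured frames:

```python
def slow_motion(frames, flows_fwd, flows_bwd, factor=8):
    """Yield a `factor`-times slowdown; flows_fwd[i]/flows_bwd[i] are the
    predicted flows between frames[i] and frames[i + 1]."""
    for i in range(len(frames) - 1):
        yield frames[i]
        for k in range(1, factor):
            t = k / factor  # 1/8, 2/8, ... 7/8 for a 30fps -> 240fps clip
            yield interpolate(frames[i], frames[i + 1],
                              flows_fwd[i], flows_bwd[i], t)
    yield frames[-1]
```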

Nvidia AI slow motion comparison

“There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball,” the researchers say. “While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices.”

Nvidia posted a video of the tech at work to its YouTube channel, and it’s well worth a minute and a half of your time.

Video interpolation – predicting new frames to slow down footage – is not an entirely new concept, and many other techniques are often employed to the same effect. However, Nvidia believes its approach is “consistently better than existing methods.”
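
For a sense of the simplest of those older techniques: plain frame blending just cross-fades between two real frames. It needs no neural network at all, but anything in motion shows up as a semi-transparent double image – exactly the artefact a flow-based approach avoids by moving pixels to their true intermediate positions:

```python
def blend_interpolate(frame0, frame1, t):
    # Naive cross-fade baseline: cheap, but moving objects ghost
    # instead of appearing at their in-between position.
    return (1.0 - t) * frame0 + t * frame1
```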

GPU compute power capable of taking on this AI task is still a way off for your standard home PC or laptop – especially if Nvidia’s consumer Volta graphics cards arrive without the Tensor Cores of their data centre siblings. But once this tech leaves the realm of the research paper and enters the data centre, it’s entirely possible a cloud server will be able to take on this mammoth interpolation undertaking for you if and when you need it.