NVIDIA, a major player in the AI field, recently worked with researchers from the University of Massachusetts Amherst and UC Merced to develop an AI technique that can intelligently interpolate ordinary 30fps footage into 240fps or 480fps slow-motion video. This is a major benefit for video content creators: for many slow-motion shots there is no longer any need for a dedicated high-frame-rate camera, since smooth slow-motion video can be produced through AI processing instead.
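To make the idea concrete, here is a minimal sketch of the bookkeeping involved: going from 30fps to 240fps means synthesizing seven new frames between every pair of originals. The naive linear blend below is not NVIDIA's learned method (which estimates motion rather than blending); it is only a plain-NumPy illustration of what "inserting intermediate frames" means, and all names in it are hypothetical.

```python
import numpy as np

def blend_pair(frame_a, frame_b, n_intermediate=7):
    """Insert n_intermediate frames between two originals by linear blending.

    Going from 30fps to 240fps requires 7 synthesized frames per original pair;
    real interpolation (including NVIDIA's model) estimates motion instead of
    blending, but the frame bookkeeping is the same.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    out = [frame_a]
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)            # fractional time between the originals
        out.append(((1.0 - t) * a + t * b).astype(frame_a.dtype))
    out.append(frame_b)
    return out

# Hypothetical usage with two dummy 720p frames:
f0 = np.zeros((720, 1280, 3), dtype=np.uint8)
f1 = np.full((720, 1280, 3), 255, dtype=np.uint8)
segment = blend_pair(f0, f1)                    # 9 frames: 2 originals + 7 new
```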
The researchers note that although mobile phones can now record slow-motion video at 240fps or even higher, such high frame rates demand large buffers and fast storage write speeds, so recording at them for long periods is impractical on ordinary devices. The team used NVIDIA Tesla V100 graphics cards and the cuDNN deep learning library to train their network on more than 11,000 everyday video clips shot at 240fps; the trained model can then convert standard 30fps video into 240fps video, and a separate dataset was used to verify that the interpolated frames stay faithful to the original footage. Most previous frame-interpolation techniques use the relationship between a preceding and a following frame to generate a single intermediate frame, whereas the NVIDIA and university researchers' method can synthesize many intermediate frames from the same pair of frames, so the number of interpolated frames can be increased significantly while the image remains undistorted and the viewing experience is preserved.
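As a rough illustration of that multi-frame idea (not the paper's network, which learns its flows and an occlusion-aware blend), the sketch below uses classical optical flow from OpenCV to synthesize a frame at any fractional time t between two originals; calling it at t = 1/8 through 7/8 turns one 30fps interval into 240fps. The function names and parameters here are illustrative assumptions.

```python
import numpy as np
import cv2

def warp(image, flow):
    """Backward-warp `image` by a per-pixel displacement field using cv2.remap."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def interpolate_at(frame0, frame1, t):
    """Synthesize one frame at fractional time t in (0, 1) between frame0 and frame1.

    A simplified classical stand-in for the learned model described above:
    estimate optical flow once, approximate the flows from time t back to each
    endpoint under a linear-motion assumption, warp both endpoints toward t,
    and blend them.
    """
    gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    flow_0_to_1 = cv2.calcOpticalFlowFarneback(
        gray0, gray1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    warped0 = warp(frame0, -t * flow_0_to_1)         # pull pixels from frame0
    warped1 = warp(frame1, (1.0 - t) * flow_0_to_1)  # pull pixels from frame1
    return cv2.addWeighted(warped0, 1.0 - t, warped1, t, 0.0)

# Hypothetical usage: 7 evenly spaced frames per 30fps interval gives 240fps.
# intermediates = [interpolate_at(f0, f1, i / 8) for i in range(1, 8)]
```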
In fact, similar technology has long been available on AMD graphics cards. The AMD graphics driver includes a feature called Fluid Motion, which can likewise interpolate 30fps video up to 60fps using the graphics card's many stream processors, without needing the CPU for the calculations. It is especially popular among anime fans, because it can turn animation that is typically produced at 24fps into smooth, silky 60fps playback.
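For the frame-rate conversion itself, here is a small sketch of the scheduling any 24fps-to-60fps converter has to perform (a simplified illustration, not AMD's actual implementation): each output frame maps back to a fractional source position, and only positions that land exactly on a source index can reuse an original frame.

```python
import math
from fractions import Fraction

def output_schedule(src_fps, dst_fps, n_out):
    """Map each output frame of a rate-converted clip back to a source position.

    Output frames that land exactly on a source index reuse the original frame;
    the rest must be synthesized between the two neighbouring originals.
    """
    step = Fraction(src_fps, dst_fps)      # source frames advanced per output frame
    schedule = []
    for k in range(n_out):
        pos = k * step                     # exact source position of output frame k
        if pos.denominator == 1:
            schedule.append((k, float(pos), "reuse original"))
        else:
            lo = math.floor(pos)
            schedule.append((k, float(pos), f"interpolate between {lo} and {lo + 1}"))
    return schedule

# 24fps -> 60fps: in every group of 5 output frames, 2 are originals and 3 are new.
for entry in output_schedule(24, 60, 6):
    print(entry)
```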
Source: Nvidia