Nvidia uses AI to make video calls way better

The company has created a new platform for developers of video-conferencing apps

Nvidia today unveiled a new platform for developers of video-conferencing apps, which the company says can cut the bandwidth consumed by video calls by a factor of 10.

The cloud-native platform — named Nvidia Maxine — also offers AI effects including gaze correction, super-resolution, noise cancellation, and face relighting.

The system slashes the bandwidth requirements of the H.264 video compression standard by using AI to analyze the "key facial points" of each person on the call, rather than streaming full frames of pixels. The software then reanimates each face in the video on the receiving side. This compression technique could both cut costs for providers and create a smoother experience for consumers.
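To illustrate the idea (this is a simplified sketch, not Nvidia's actual implementation, and all names here are hypothetical): the sender transmits one reference frame up front, then only a handful of facial keypoints per frame; the receiver uses a generative model, stubbed out below, to reanimate the reference frame from those keypoints.

```python
# Sketch of keypoint-based video compression: per-frame payloads carry a few
# facial landmarks instead of a pixel grid. Names and numbers are illustrative.

from dataclasses import dataclass
from typing import List, Tuple

Keypoint = Tuple[float, float]  # (x, y) in normalized image coordinates

@dataclass
class KeypointPacket:
    """Per-frame payload: a few facial landmarks instead of a pixel grid."""
    keypoints: List[Keypoint]

    def size_bytes(self) -> int:
        # Two 32-bit floats per keypoint.
        return len(self.keypoints) * 2 * 4

def full_frame_size_bytes(width: int, height: int, bits_per_pixel: float) -> int:
    """Approximate per-frame cost of a conventional codec at a given
    compressed bit depth (bits per pixel)."""
    return int(width * height * bits_per_pixel / 8)

def reanimate(reference_frame: bytes, packet: KeypointPacket) -> bytes:
    """Receiver side: a generative model would warp the reference frame to
    match the transmitted keypoints. Stubbed here: returns the reference."""
    return reference_frame

# Compare per-frame bandwidth for a 720p call:
conventional = full_frame_size_bytes(1280, 720, bits_per_pixel=0.1)
landmarks = KeypointPacket([(0.5, 0.5)] * 10)  # 10 tracked facial points
print(conventional, landmarks.size_bytes())    # 11520 vs 80 bytes per frame
```

The exact savings depend on how many keypoints are tracked and how often the reference frame is refreshed, but the gap between sending pixels and sending landmarks is what makes an order-of-magnitude bandwidth reduction plausible.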

The announcement comes amid an explosion in video calls caused by the COVID-19 pandemic. Nvidia says that more than 30 million web meetings now take place every day, and that video conferencing has increased tenfold since the beginning of the year.

“Video conferencing is now a part of everyday life, helping millions of people work, learn and play, and even see the doctor,” said Ian Buck, vice president and general manager of Accelerated Computing at Nvidia.

“Nvidia Maxine integrates our most advanced video, audio, and conversational AI capabilities to bring breakthrough efficiency and new capabilities to the platforms that are keeping us all connected.”

Developers can also use the platform to add virtual assistants, translations, closed captioning, transcriptions, and animated avatars to their video conferencing tools.

Computer vision developers, software partners, startups, and computer manufacturers creating audio and video apps and services can now apply for early access to the platform.

Published October 5, 2020 — 17:41 UTC