AI for news production is an area that has drawn contrasting opinions. On one hand, it could help media houses produce more news in better formats with minimal effort; on the other, it could strip journalism of its human element or put people out of jobs.
In 2018, an AI anchor developed by China’s Xinhua news agency made its debut. Earlier this month, the agency released an improved version that mimics human voices and gestures. Text-based news AI has advanced too, with algorithms now writing headlines. Now, China’s search giant Baidu has developed a new AI model called Vidpress that brings video and text together, creating a clip based on written articles.
Baidu has currently deployed Vidpress on its short-video app Haokan, and the tool only works with Mandarin. The company claims the algorithm can produce up to 1,000 videos per day, far more than the 300-500 its human editors currently put out. Vidpress can create a two-minute 720p video in two and a half minutes, while human editors take an average of 15 minutes for the same task.
To train this model, Baidu used thousands of online articles to teach it the context of a news story. Additionally, the company had to train separate AI models for voice and video generation. In the final step, the algorithm syncs both streams into a smooth final video.
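Baidu hasn’t described how that final sync step works, but the idea can be sketched: let the narration set the pace, then trim or extend each video segment to match its voice line. Everything below — the function names, the assumed narration speed, and the trim/hold logic — is a hypothetical illustration, not Vidpress’s actual method.

```python
# Hypothetical sketch of syncing a voice track with video segments.
# The 2.5 words-per-second pace and the trim/hold strategy are assumptions.

WORDS_PER_SECOND = 2.5  # assumed narration pace


def narration_seconds(line: str) -> float:
    """Estimate how long a synthesized voice takes to read one line."""
    return len(line.split()) / WORDS_PER_SECOND


def sync_segments(narration_lines, clip_lengths):
    """Return timing cues so each clip lasts exactly as long as its line."""
    cues, t = [], 0.0
    for line, clip_len in zip(narration_lines, clip_lengths):
        duration = narration_seconds(line)  # narration sets the pace
        # A long clip gets trimmed; a short one is held on its last frame.
        action = "trim" if clip_len > duration else "hold_last_frame"
        cues.append({"start": round(t, 2), "duration": round(duration, 2),
                     "action": action})
        t += duration
    return cues


cues = sync_segments(
    ["Apple unveiled a new iPhone today", "Preorders open on Friday"],
    [10.0, 1.5],  # raw clip lengths in seconds
)
print(cues)
```

In this toy model the voice track is authoritative and video bends to fit it, which matches the article’s description of two independently generated streams being reconciled at the end.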
When you feed the algorithm a URL, it automatically fetches all related articles from the internet and creates a summary. For instance, if your input is a story on Apple launching a new iPhone, the AI will fetch all details related to the launch, including specs and price. Then, to create a video, it searches for related pictures and clips in your media library and on the web.
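The URL-to-storyboard flow described above can be sketched in a few lines. Baidu hasn’t published Vidpress’s internals, so every function, the stubbed article fetcher, and the tag-matching media lookup here are assumptions for illustration only.

```python
# Hypothetical Vidpress-style pipeline: URL in, summary plus media list out.
# All names and logic are illustrative assumptions, not Baidu's actual code.

from dataclasses import dataclass, field


@dataclass
class Storyboard:
    summary: str
    media: list = field(default_factory=list)


def fetch_related_articles(url: str) -> list:
    # Stand-in for crawling the web for coverage of the same story.
    return [
        "Apple launches a new iPhone with a 6.1-inch display.",
        "The new iPhone starts at $799 and ships next week.",
    ]


def summarize(articles: list) -> str:
    # Naive extractive summary: keep the first sentence of each article.
    return " ".join(a.split(". ")[0].rstrip(".") + "." for a in articles)


def find_media(summary: str, library: dict) -> list:
    # Pick library assets whose tag appears in the summary text.
    words = summary.lower()
    return [asset for tag, asset in library.items() if tag in words]


def build_storyboard(url: str, library: dict) -> Storyboard:
    articles = fetch_related_articles(url)
    summary = summarize(articles)
    return Storyboard(summary=summary, media=find_media(summary, library))


library = {"iphone": "iphone_hero.jpg", "display": "display_closeup.mp4"}
board = build_storyboard("https://example.com/apple-launch", library)
print(board.summary)
print(board.media)
```

The real system would swap the stubs for a web crawler, a trained summarization model, and image/video retrieval, but the shape of the pipeline — fetch, summarize, match media — follows the article’s description.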
Baidu said its AI can also detect trends on social media and create videos on related topics. The company says it aims to offer Vidpress to news agencies and creators so they can turn their posts into videos with synthesized narration.
The search giant said the AI only draws on the most trusted sources to create content, but it didn’t provide a list of those sources. While the company is running this model in a controlled environment right now, it might have to put guardrails in place when it’s released to more people. It will also need to assign editors to check whether someone is producing videos that spread misinformation or violate copyrights.