This article was published on December 16, 2020

Facebook is reportedly developing AI to summarize news — what could go wrong?


Facebook has been trying to get a foothold in the news space for many years. Last year, the company launched a dedicated section on its site called Facebook News for users in the US. It also wants to expand this program to other countries such as Brazil, Germany, and India.

But that’s not the only project the social network is working on in the news space. According to a report from BuzzFeed News, Facebook is testing an AI-powered tool called TL;DR (Too Long; Didn’t Read) to summarize news pieces, so you don’t even have to click through to read those articles.

The report noted that the company showed off this tool in an internal meeting last night. It’s also planning to add features such as voice narration and an assistant to answer queries about an article.

On the face of it, this seems like a great idea: a short summary of an article you don't have time to read in full. There are already similar tools, such as the AutoTLDR bot on Reddit. However, given Facebook's sketchy history with news and publishers, there are many ways this could go wrong.

At best, the AI makes silly mistakes in parsing the article content, so you can't make sense of the summary it spits out. After all, we've seen many incidents where bots picked up problematic content from their training data and spewed racist gibberish.

At worst, there’s potential to create or distribute misinformation. There are a ton of news sources on Facebook that are not known for their accuracy. If a skewed summary of those articles starts floating around, it might create more trouble.

Facebook will also need to train its algorithm to avoid taking quotes or sentences from articles out of context. A seemingly innocuous summary could end up contradicting the article or misrepresenting its subject.

In the past, researchers have successfully tricked AI systems that are designed to detect toxic comments on the internet with ‘positive’ words. If the people behind propaganda operations could crack Facebook’s algorithm for summarizing articles, they could write stories in such a way that the summaries include the messages they want to spread.

Many reports have pointed out the social network’s massive misinformation problem, and a lot of it was because of poorly designed software. While Facebook’s TL;DR product is not public yet, it already sounds like it could be a disaster.
