
This article was published on March 17, 2020

Social media firms will use more AI to combat coronavirus misinformation, even if it makes more mistakes

AI will have to make up for the lack of human moderators


Image by: Jason Howie

Social media platforms have been flooded with falsehoods, conspiracy theories, and exaggerations about the coronavirus since the outbreak emerged last December in Wuhan, China.

The outbreak has been falsely blamed on the 5G rollout damaging immune systems, an experiment gone wrong in a Chinese research facility, and, of course, the Rothschilds wanting more money, this time through their supposed ownership of a patent on the coronavirus.

More dangerous than the conspiracy theories is the misleading medical advice, even when it comes with good intentions. False claims that the virus doesn’t infect children and that the infection dies in temperatures above 27°C have been seen by hundreds of thousands of people. If they follow the advice, they could put lives at risk.

Under pressure from governments and medical experts, tech firms are ramping up their efforts to combat the misinformation. On Monday, Google, Facebook, Microsoft, Twitter, YouTube, Reddit, and LinkedIn issued a joint statement announcing that they were working together to tackle the problem.

[Read: How Facebook’s new AI system has deactivated billions of fake accounts]

But as the misinformation grows and the tech giants start sending their staff home to work, these efforts are increasingly reliant on AI.

An imperfect solution

YouTube, its parent company Google, and Twitter have all announced that the shift to remote work is forcing them to rely more on AI moderation, while Facebook has stated that the loss of human reviewers would lead the company to “increase our reliance on proactive detection in other areas to remove violating content.”

AI moderation can struggle to match the accuracy of human reviewers, particularly when there are fewer people available to check its decisions. Google, YouTube, and Twitter have all acknowledged that the increased reliance on automated moderation will lead to more accounts and content being unfairly removed, and that the appeals process may also be slower than usual.
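To see why fewer human reviewers translates into more wrongful removals, it helps to picture automated moderation as a confidence-threshold system. The sketch below is purely illustrative, assuming a hypothetical classifier score per post and a hypothetical review queue; it is not how any of these platforms actually work.

```python
# Illustrative sketch only: the Post class, scores, thresholds, and queue are
# all hypothetical and do not reflect any platform's real moderation pipeline.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float  # hypothetical model confidence (0-1) that the post is misinformation

def moderate(posts, auto_remove_threshold, review_capacity):
    """Remove high-confidence posts automatically; send the uncertain middle
    band to human reviewers, up to the available capacity."""
    removed, review_queue, kept = [], [], []
    for post in posts:
        if post.score >= auto_remove_threshold:
            removed.append(post)       # removed with no human in the loop
        elif post.score >= 0.5:
            review_queue.append(post)  # uncertain: ideally a human decides
        else:
            kept.append(post)
    # With fewer moderators, part of the queue can't be reviewed; a platform
    # must either leave it up or take it down preemptively, accepting errors.
    reviewed = review_queue[:review_capacity]
    unreviewed = review_queue[review_capacity:]
    return removed, reviewed, unreviewed, kept

posts = [
    Post("5G towers spread the virus", 0.97),
    Post("Wash your hands regularly", 0.08),
    Post("Heat above 27°C kills the infection", 0.62),  # borderline case
]

# Full staffing: the borderline post gets a human decision.
print(moderate(posts, auto_remove_threshold=0.9, review_capacity=10))

# Staff sent home: the threshold drops so the machine decides alone,
# removing the borderline post automatically (a potential false positive).
print(moderate(posts, auto_remove_threshold=0.6, review_capacity=0))
```

Lowering the automatic-removal threshold to compensate for lost review capacity catches more misinformation, but it also sweeps up borderline legitimate posts: exactly the tradeoff the platforms have warned about.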

Such wrongful removals will disrupt the experience of both users and creators. They could even fuel another wave of misinformation if people whose content is unfairly removed claim they’ve been targeted for political reasons.

But with politicians, journalists, and even medical authorities all peddling false narratives about coronavirus, who would you trust more to combat the problem: humans or AI?
