
This article was published on February 26, 2024

Meta taskforce to fight EU election disinformation as deepfake fears grow

AI-generated content could swing votes


Meta is launching a special task force dedicated to tackling disinformation and abusive AI-generated content in the lead-up to the EU elections in June. 

The power of social media to influence voting is well documented. But the rapid rise of AI — which can generate “deepfake” images, text, and videos at the push of a button — has triggered new fears that the technology will be used to disrupt major elections across the world this year.  

Led by a team of intelligence experts from within the company, Meta’s new “operations centre” has been set up to “swiftly identify potential threats” and implement “real-time mitigation strategies,” said the firm’s head of EU affairs, Marco Pancini. 

The announcement comes just weeks after TikTok set out its own preparations for the EU elections, which stand to be this year’s second-largest democratic vote in the world, behind India’s.

Under the EU’s new Digital Services Act (DSA), online platforms with more than 45 million average monthly active users in the EU — like Facebook and TikTok — are obliged to take measures against disinformation and election manipulation.

What is Meta doing?

Meta said it will remove content from its platforms Facebook, Instagram, and Threads that could “contribute to imminent violence or physical harm, or that is intended to suppress voting.”

Besides removing illegal content, Meta will expand its team of independent fact-checkers, adding three new partners in Bulgaria, France, and Slovakia. 

When content is “debunked” by these fact-checkers, Meta attaches warning labels and reduces its distribution in the feed so people are less likely to see it. When one of these labels is placed on a post, 95% of people don’t click through to view it, the company claims. 

“Ahead of the elections period, we will make it easier for all our fact-checking partners across the EU to find and rate content related to the elections because we recognise that speed is especially important during breaking news events,” Pancini said. 

The threat of AI-generated content

As part of Meta’s efforts to address AI risks, it will add a new feature for users to disclose when they share AI-generated video or audio. The company said it could even impose penalties for noncompliance, although it did not specify what these would entail.

Advertisers who run ads related to social issues, elections, or politics on Meta platforms will also have to disclose if they use a photorealistic image, video, or audio that has been AI-generated.

Earlier this month, 20 tech companies, including Meta, Google, Microsoft, X, Amazon, and TikTok, signed a pledge to crack down on AI content designed to mislead voters.

The firms aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. 

The power of AI to disrupt elections has already come under the spotlight. 

In the US, a political ad published by the Republican Party last year depicted a dystopian scenario should President Joe Biden be re-elected: explosions in Taipei as China invades, waves of migrants causing panic in the US, and martial law imposed in San Francisco.

In November, a recording purporting to be of the Mayor of London, Sadiq Khan, circulated on social media. It called for Armistice Day commemorations to be postponed so that a pro-Palestinian march could go ahead instead.

Both the video and audio were fakes generated by AI. Khan later warned that deepfakes could swing a close UK election.

“The era of deepfake and AI-generated content to mislead and disrupt is already in play,” British Home Secretary James Cleverly told The Times yesterday. 

The secretary warned that criminals and “malign actors” working on behalf of hostile states could use AI-generated “deepfakes” to hijack the UK general election.

This warning comes amid the biggest election year in world history. It is estimated that 2 billion people around the globe will vote in national elections throughout 2024, including in the UK, US, India, South Africa, and 60 other countries.

