
This article was published on February 5, 2020

Twitter’s new manipulated media rules leave a lot of gray area

Image by: #DeepFake (Deutsche Twitter Trends)
Story by Ivan Mehta

Ivan covers Big Tech, India, policy, AI, security, platforms, and apps for TNW. That's one heck of a mixed bag. He likes to say "Bleh."

Last year, deepfakes emerged as a dangerous technology, and social networks scrambled to form some kind of rules around them. Now, they've slowly started to build policies to detect and control manipulated content.

Last month, Facebook released its own set of rules to crack down on deepfakes. Yesterday, Twitter followed suit and released its own guidelines and course of action for manipulated content.

These rules are based on the results of an open survey and feedback the company solicited from its users last November. The new policy includes a framework for how Twitter will label tweets containing manipulated content. However, some rules leave a lot of gray area and put the onus on the company's AI models and moderators.

[Read: Facebook vows to crack down on ‘misleading’ deepfakes]

First, let's talk about what a tweet with detected manipulated content looks like. Starting March 5, Twitter will display a label, reduce the tweet's visibility, and even show a warning to users who are about to retweet it. The company will remove a tweet with such content if it threatens someone's privacy or physical wellbeing.

Facebook's rules left a lot of room for cleverly edited videos that might be used to spread misinformation. Twitter's rules, on the other hand, are clearer about covering media that is factually and contextually misleading. Here are some of the factors the company uses to determine what counts as manipulated content:

  • Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing.
  • Any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added or removed.
  • Whether media depicting a real person has been fabricated or simulated.

Twitter also says it'll assess the context of a tweet to determine the course of action, and this is where the gray area lies. To understand the context, the company looks at:

  • The text of the Tweet accompanying or within the media.
  • Metadata associated with the media.
  • Information on the profile of the person sharing the media.
  • Websites linked in the profile of the person sharing the media, or in the Tweet sharing the media.

These conditions are quite unclear about how much weight the context rules will carry when the moderation team reviews a tweet. Traditionally, Twitter has been terrible at understanding context: it has repeatedly blocked people for sharing public information and suspended accounts for tweeting "kill me" ironically.

The social network is flooded with videos and photos shared under captions that misrepresent them. The company will need to work quickly and effectively to label this kind of content.

Fact-checking agencies often tweet manipulated content to bust myths, so the social network will have to take care not to remove those tweets. A report by The Verge suggests the company will work with third-party agencies to reduce errors:

The format that we’re using in our product to curate these sources is Moments. While we’re talking to a number of potential partners who we think have specific expertise in the area of media authenticity, we wouldn’t just be looking to feature tweets from only a select number of partners.

To its credit, Twitter admits this is a challenge and that it will make some errors along the way. Hopefully, through this program the company will at least eradicate hoaxes and manipulated content related to climate change and health.

This framework also comes just in time, as the US presidential elections are slated for later this year. We've seen plenty of deepfake videos featuring everyone from Bernie Sanders to Nancy Pelosi, and Twitter will surely want to remove them as soon as possible.
