Last year, deepfakes emerged as a dangerous new threat, and social networks scrambled to come up with rules around them. Now, they've slowly started to build policies to detect and control manipulated content.
Twitter's new rules are based on the results of an open survey and feedback the company asked its users for last November. The new policy includes a framework for how Twitter will label tweets containing manipulated content. However, some rules leave a lot of gray area and put the onus on the company's AI models and moderators.
First, let's talk about what a tweet with detected manipulated content looks like. Starting March 5, Twitter will display a label, reduce the tweet's visibility, and even show a warning to users who are about to retweet a tweet with modified media. The company will remove a tweet with such content if it threatens someone's privacy or physical wellbeing.
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
Facebook's rules left a lot of room for cleverly edited videos that might be used to spread misinformation. Twitter's rules, on the other hand, are clearer about covering media that is factually and contextually misleading. Here are some of the factors the company uses to determine what falls under manipulated content:
- Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing.
- Any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added or removed.
- Whether media depicting a real person has been fabricated or simulated.
Twitter also says it'll assess the context of the tweet to determine the course of action — this is where the gray area lies. To understand the content, the company looks at:
- The text of the Tweet accompanying or within the media.
- Metadata associated with the media.
- Information on the profile of the person sharing the media.
- Websites linked in the profile of the person sharing the media, or in the Tweet sharing the media.
It's unclear how much weight these contextual signals carry when the moderation team is reviewing a tweet. Traditionally, Twitter has been terrible at understanding context: it has repeatedly blocked people for sharing public information and suspended accounts for tweeting "kill me" ironically.
Fact-checking agencies often tweet out manipulated content in order to bust myths, so the social network will have to take care not to remove those tweets. A report by The Verge suggests the company will work with third-party agencies to reduce errors:
The format that we’re using in our product to curate these sources is Moments. While we’re talking to a number of potential partners who we think have specific expertise in the area of media authenticity, we wouldn’t just be looking to feature tweets from only a select number of partners.
To its credit, Twitter admits this is a challenge and that it will make some errors along the way. Hopefully, through this program the company will at least eradicate hoaxes and manipulated content related to climate change and health.
This framework also comes just in time, as the US presidential election is slated for later this year. We've already seen plenty of deepfake videos featuring everyone from Bernie Sanders to Nancy Pelosi, and Twitter will surely want to remove them as soon as possible.