Facebook today announced that it will ban misleading manipulated media, including Photoshopped images and deepfakes. The announcement comes just in time, ahead of this year's US presidential election.
The new policy was announced just before Monika Bickert, Vice President of Global Policy Management, is set to testify before the House Energy and Commerce consumer protection subcommittee tomorrow to discuss how the platform will tackle manipulated media.
The social network will remove media from its platform if it meets both of the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
The company said it won’t remove modified content that is intended as parody or satire. The exemption also covers videos made for fun using less sophisticated editing techniques, often called ‘shallowfakes.’
The policy also doesn’t cover videos that “have been edited solely to omit or change the order of words.” That means media such as last week’s video edited to make Joe Biden sound racist might not be removed.
This is such a narrowly-drawn definition of what would be booted off the platform that it's baffling. You can cut up a speech to make someone say something diametrically opposed to what they actually said and it's ok. Edit a la the Nancy Pelosi video, it's ok. Weird.
— Chris Stokel-Walker (@stokel) January 7, 2020
Many critics are arguing that it’s a narrowly written policy that might let a lot of videos off the hook.
Naturally, the challenge for the tech giant will be detecting manipulated media and deciding whether it’s misleading. Last October, the company joined Amazon and Microsoft to help researchers develop tools for better detection. In September, it also announced a Deepfake Detection Challenge program in partnership with universities. These partners pledged $10 million and released 5,000 videos to help developers.
However, it’s one thing to invest in a research project, and it’s another thing to implement policies in the same area when millions of posts are going live at the same time. Facebook says it’s consulting with more than 50 experts across the world with technical, policy, media, legal, civic and academic backgrounds to tackle this issue.
Social media has always had controversial takedown cases, and Facebook is one of the first major social networks to lay down norms for deepfakes. We’ll have to wait and see whether it can enforce them effectively.