
This article was published on February 26, 2019

Content moderation jobs for social networks are still terrible


Image by: Alexey Sokolov / Icons8

The Verge’s Casey Newton has a harrowing story out about the pitiful conditions that contract workers tasked with moderating content on Facebook have to deal with on a daily basis. It’s not the first such investigative piece on this topic, and perhaps that’s what worries me the most.

One of the reports on the horrific nature of this job came from Wired’s Adrian Chen, who back in 2014 tracked contractors working on behalf of US-based companies like Facebook to a content moderation firm in the Philippines. The people working there were instructed to look through hundreds of posts a day and keep an eye out for content depicting “pornography, gore, minors, sexual solicitation, sexual body parts/images, and racism,” so it could be taken down swiftly.

That story also mentioned how a lot of content moderation work is done in the US, and that’s the case with The Verge’s story from this week. Facebook currently has some 15,000 people around the world rifling through posts to flag and remove problematic material, and about 1,000 of them work at a facility managed by Cognizant in Phoenix, Arizona.

From the sound of things, the job hasn’t gotten any better. Salaries are above minimum wage, but not by much; people develop post-traumatic stress disorder, and some start to believe the conspiracy theories they encounter far more often than the average user as part of their job.

Even more troubling is the nature of the relationship between moderators and their superiors, who evaluate whether the moderators made the right calls on videos and posts based on their understanding of Facebook’s content policies. Newton noted that these case reviews were sometimes subjective, and that disagreements could drag down moderators’ ‘accuracy scores’ – putting their jobs at risk. That allegedly led to hostile behavior, and some quality assurance workers reportedly feared for their safety at the office enough to carry concealed weapons.


It’s disappointing to learn that employing artificial intelligence systems and thousands of humans for this task isn’t enough to stem the flow of content that violates the policies of social networks and media platforms. The scale is enormous: back in 2016, Facebook noted that its users watched 100 million hours of video per day. And yet we expect these services to be safe enough for children to use, from the posts to the comments.

As I’ve written before, this is clearly not an easy problem to solve. But over the past few years, companies haven’t done enough – or been able to do enough – to make the job much easier for content moderators, whether by developing more effective AI to automatically remove problematic content, or by improving working conditions and benefits to help people cope with the endless stream of disturbing posts.

Newton’s post is well worth a read, and includes his experience of a visit to the Phoenix content moderation facility; find it over on this page.

