The Verge's Casey Newton has a harrowing story out about the pitiful conditions that contract workers tasked with moderating content on Facebook have to deal with on a daily basis. It's not the first such investigative piece on this topic, and perhaps that's what worries me the most.
One of the earlier reports on the horrific nature of this job came from Wired's Adrian Chen, who in 2014 tracked contractors working on behalf of US-based companies like Facebook to a content moderation firm in the Philippines. The people working there were instructed to look through hundreds of posts a day and keep an eye out for content depicting "pornography, gore, minors, sexual solicitation, sexual body parts/images, and racism," so it could be taken down swiftly.
That story also mentioned how a lot of content moderation work is done in the US, and that's the case with The Verge's story from this week. Facebook currently has some 15,000 people around the world rifling through posts to flag and remove problematic material, and about 1,000 of them work at a facility managed by Cognizant in Phoenix, Arizona.
From the sound of things, the job hasn't gotten any better. Salaries are above minimum wage, but not by much; people develop post-traumatic stress disorder, and some start to believe the conspiracy theories they encounter as part of their job far more often than the average user does.
Even more troubling is the nature of the relationship between moderators and their superiors, who have to evaluate whether moderators made the right calls on videos and posts based on their understanding of Facebook's content policies. Newton noted that these case reviews were sometimes subjective, and disagreements could lead to moderators' "accuracy scores" going down, putting their jobs at risk. That allegedly led to hostile behavior, with some quality assurance workers fearing for their safety at the office enough to carry concealed weapons.
It's disappointing to learn that employing artificial intelligence systems and thousands of humans for this task isn't enough to stem the flow of content that violates the policies of social networks and media platforms. Back in 2016, Facebook noted that its users watched 100 million hours of video per day. Furthermore, we expect these services to be safe enough for children to use, from the posts to the comments.
As I've written before, this is clearly not an easy problem to solve. But over the past few years, companies haven't done enough, or been able to do enough, to make things much easier on those with the difficult job of content moderation, whether by developing more effective AI to automatically remove problematic content or by improving working conditions and benefits to help people cope with the endless stream of disturbing posts.
Newton's post is well worth a read, and includes his experience of a visit to the Phoenix content moderation facility; find it over on this page.