
This article was published on January 12, 2017

Microsoft sued by employees who developed PTSD after reviewing disturbing content

In what sounds like the plot for an episode of the forward-looking dystopian show Black Mirror, Microsoft is being sued by two of its Online Safety Team employees over claims that they developed post-traumatic stress disorder (PTSD) in their roles at the company.

In the lawsuit filed at the end of last year (PDF), Henry Soto and Greg Blauert, who worked at Microsoft’s facility in King County, Washington, claimed that for years they were tasked with reviewing toxic content – including images and videos of child sexual abuse, people dying, and murder – so they could take it down and report it to law enforcement and child protection agencies.

The suit alleges that Microsoft was negligent in its handling of the mental health of the employees on this team – even though the company extended programs and benefits to ensure the welfare of people in its Digital Crimes Unit, which had similar duties.

The case raises questions not just about the mental health and well-being of people in these taxing roles, but also about the ways in which content platforms deal with such content, in terms of policies that are enforced and technology that is developed and implemented to tackle it.

Moderating media that may not be suitable for all audiences is important for social networks and other platforms that host user-submitted content, so they can promise their users a safe and positive environment – and policies that outline what’s okay to share are a good start.

However, merely sticking a list of rules at the door doesn’t ensure that your users will follow them. That’s why technology that can automatically detect and flag such content is necessary. It’s already here: in recent years, giants like Facebook, Google, Twitter, and even Microsoft have worked to develop automated cloud-based solutions to spot images of child sexual abuse distributed through their services.
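Microsoft’s own PhotoDNA is a well-known example of this approach: it computes a robust hash of each uploaded image and compares it against a database of hashes of previously identified abusive material. PhotoDNA itself is proprietary, so the sketch below only illustrates the general hash-matching idea using the open-source ImageHash library as a stand-in; the stored hash value, the distance threshold, and the file name are placeholder assumptions, not details from any real system.

```python
# Minimal sketch of perceptual-hash matching, the general technique behind
# tools like PhotoDNA. NOT the real algorithm -- uses the open-source
# ImageHash library (pip install ImageHash Pillow) purely for illustration.
import imagehash
from PIL import Image

# Hypothetical database of perceptual hashes of known, previously flagged images.
KNOWN_HASHES = {
    imagehash.hex_to_hash("ffd7918181c9ffff"),  # placeholder 64-bit hash
}

# Maximum Hamming distance at which two hashes count as a near-duplicate
# (placeholder value; real systems tune this carefully).
MATCH_THRESHOLD = 5

def is_known_flagged(image_path: str) -> bool:
    """Return True if the image is a near-duplicate of known flagged content."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_known_flagged("upload.jpg"))  # hypothetical uploaded file
```

Automated matching like this catches re-uploads of material that has already been identified, which is precisely why new or borderline content still ends up in front of human reviewers.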

Unfortunately, this technology still requires human oversight, and that’s where people like Soto and Blauert come in. And they’re certainly not alone: as this excellent 2014 piece from Wired describes, there are scores of young college graduates in the US who take up content moderation jobs straight out of school, and hundreds of thousands more like them in developing countries where these tasks are outsourced to massive teams.

Even with access to therapy and counseling, the story notes, the arduous work of closely examining hundreds of pieces of content depicting the most depraved acts you can think of will certainly take a toll on your psyche.

Hopefully this case will bring to light the dark world in which people like Soto and Blauert have to survive, just to ensure our news feeds aren’t polluted by things most of us don’t have the stomach for. It’s also worth noting that in some countries where this work is outsourced, discussing mental health can be difficult – and the same goes for taking legal action against a global corporation.

Ultimately, it’s up to companies to fully understand what they’re asking of these employees, and to empathize with their situations – so as to improve the conditions in which they provide this valuable service and come up with innovative technology that can minimize our reliance on human involvement for content moderation.
