This article was published on December 6, 2017

Can 10,000 employees keep YouTube free of objectionable content?

As if it isn’t hard enough to run a platform that serves up a billion hours of video per day across the globe, YouTube also has to contend with all sorts of problematic content uploaded to its site. From terrorist propaganda to disturbing material disguised as cartoons for children, the company has plenty to sift through.

Earlier this week, YouTube CEO Susan Wojcicki pointed to the gravity of the situation: since June, the company has taken down more than 150,000 videos depicting violent extremism. The task would have required 180,000 people working full-time to identify these clips, not to mention the stress and trauma they’d have to endure while viewing and screening such content.

Thankfully, AI and machine learning have come to the rescue: 98 percent of the extremist videos YouTube removes are now flagged by its algorithms, and 70 percent of such clips are taken down within eight hours of being uploaded.

But there’s more work to be done yet, so YouTube plans to grow the teams it employs to take action on violative content to 10,000 people next year. Will that be enough to keep the platform safe?

The tricky thing with content moderation is staying ahead of the curve: it’s practically impossible to predict, for example, that creators will one day upload clips depicting cartoon characters drinking bleach, or that users might trick the search engine into autosuggesting queries about incest.

Platforms also need to be agile when it comes to drawing up and enforcing policies that address viewers’ concerns. Last month, YouTube expanded its takedown policy to cover not just videos depicting violent extremism, but also content that includes “people and groups that have been designated as terrorist by the U.S. or British governments” – likely after feeling pressure from various countries to clean up its act.

Beyond deploying manpower and technology to tackle questionable content, YouTube faces an enormous challenge in identifying and responding to its global audience’s idea of what counts as troubling, disturbing, and potentially harmful. With the power to let anyone on the planet publish content comes the responsibility to police it to the standards people expect from the world’s biggest video platform.

To understand just how difficult that is, look at what The Guardian unearthed when it investigated Facebook’s guidelines for content moderators. Some material, like images of animal abuse, is acceptable on the site, likely because it might be posted to draw attention to the issue, but other content, like a comment that reads, “Someone shoot Trump,” is a no-no. In today’s world, working for a tech firm can mean that your job involves drawing lines in the sand to decide what’s acceptable in society and what isn’t.

Therein lies the danger of exercising too much editorial control and potentially stifling freedom of speech – but as I wrote last month, I believe it’s important to create a space that’s free of extremist content and safer for people across the world to use. I’d also argue that it makes business sense: after all, YouTube can’t monetize videos that don’t align with its advertisers’ values, so is there much reason to allow extremists to spew hateful messages on the platform, or to let people post videos showing children playing ‘doctor’ with adults?

Will 10,000 people be enough to fight objectionable content on YouTube? We’re only as close to answering that as we are to predicting just how depraved people can be.
