
This article was published on March 1, 2019

YouTube’s need to disable comments highlights how shitty the internet is


Headlines over the past several days have been dominated by stories about problematic content on major platforms, and the harrowing task of policing them.

The most recent bit of news comes from YouTube, which says it has now disabled comments on tens of millions of videos featuring minors, in an effort to prevent predatory behavior in the comments sections of those clips.

It’s not the first time YouTube has had to deal with users sexualizing children on its platform; as The Verge notes, the company has been tackling such issues since at least 2013. It’s a big step for a service whose communities of viewers around the world primarily interact with each other through comments.

It’s tragic that the company has had to resort to this drastic measure, despite having the wherewithal to deploy artificial intelligence and thousands of human content moderators to tackle violative content. But that’s the world we live in now, and that’s why we can’t have nice things on the internet, at least for the time being.


We’ve seen people misuse online platforms for decades, so this isn’t a new problem per se. However, we now have far higher expectations of hygiene and safety from these services, and technology hasn’t kept up with those needs. In YouTube’s case, automated systems have helped the company purge hundreds of thousands of extremist videos faster than a reasonably sized team of human reviewers could – but those systems apparently can’t keep pace with skeezy commenters.

Should we squarely blame tech firms? I believe companies should certainly do more to ensure their services are safe to use as they scale up, and they should be held responsible for policy violations and the harm users face as a result of their failure to enforce said policies. At the same time, it’s important to remain cognizant of how big a challenge this is. For reference, YouTube delivers a billion hours of video per day, and roughly 1.9 billion logged-in users visit the site each month.

It’s in YouTube’s best interest to sanitize its platform as best it can. You might argue that being lax about policing comments and letting alleged paedophiles run loose there could be good for business, but consider how much money the company stands to make from millions of people watching its videos instead of tuning into cable channels – and those are mostly videos YouTube didn’t have to spend money to produce. A toxic platform puts that audience, and the advertisers chasing it, at risk.

Yes, you could put even more people to work moderating comments and videos. But that’s not a great option either, as we’ve learned from several stories chronicling the difficult lives of contracted content moderators since 2014. Trawling through problematic posts has reportedly caused many of these workers mental trauma, and led several of them to quit those low-paying roles within months. YouTube itself limits its moderators to four hours a day of viewing disturbing content. That’s a job you probably don’t want, so it’s not exactly fair to demand that many more people be tasked with doing it.

Ultimately, artificial intelligence needs to get a lot better at flagging violative content and interactions on such platforms; at the same time, companies need to enforce their policies more stringently to keep bad actors out. Until then, maybe moves like disabling comments are indeed necessary – because we sure as hell can’t be arsed to act like decent human beings online.

