
This article was published on June 27, 2016

Should video sites like YouTube automatically remove extremist content?

Reuters reported that Facebook and YouTube, which are among the world’s largest sources for distributing online video, have been quietly removing extremist content from their platforms using an automated system.

The method itself isn’t new: the removal system looks for file hashes (compact numerical fingerprints of a video’s data) belonging to videos known to feature extremist messaging, and then flags those videos so they can be deleted. Facebook, Google and Twitter partnered with the Internet Watch Foundation to use a similar tool to remove images of child sexual abuse from the Web last August.
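To make the idea concrete, here’s a minimal sketch of hash-based flagging, assuming a simple exact-match lookup. The names (KNOWN_BAD_HASHES, should_flag) and the placeholder digest are hypothetical; the platforms haven’t disclosed how their matching actually works, and real systems likely rely on more sophisticated robust hashes that survive re-encoding rather than exact cryptographic digests.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes for videos already identified as
# extremist content. Real platform systems are not public and likely use
# perceptual ("robust") hashing, but exact matching illustrates the flow.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_flag(upload: Path) -> bool:
    """Flag an uploaded video for removal if its hash matches a known entry."""
    return file_hash(upload) in KNOWN_BAD_HASHES
```

In practice the matching would run at upload time, so a re-uploaded copy of a known video never reaches viewers in the first place.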

However, neither Facebook nor YouTube has confirmed they’ve been doing this – nor would they discuss the criteria for labeling content as ‘extremist’.

The Counter Extremism Project (CEP), an organization dedicated to fighting extremism, announced earlier this month that it has developed an automated hashing system to identify questionable content so it can be removed before it goes viral.

But according to Reuters’ sources, tech giants like Google and Facebook haven’t yet begun using the CEP’s system and are wary of introducing third-party tools into their content policing programs.

So are these companies doing right by us in removing extremist content? Seamus Hughes, deputy director of George Washington University’s Program on Extremism, told Reuters that ‘extremist content exists on a spectrum, and different web companies draw the line in different places.’

Plus, these companies aren’t publicly discussing their tactics and policies for handling such content, so the system is less transparent than it should be, particularly where freedom of expression is concerned.

However, both Facebook and YouTube have policies for controlling harmful content. For example, Facebook says that it will not tolerate ‘anything organizing real world violence’, and both platforms allow users to flag such content.

Given the kind of damage extremist videos can do when widely distributed, we’re probably better off with online services handling such content automatically. What we do need, however, is oversight: the ability to review and correct the way these systems work over time, as well as ways to contest the accidental removal of legitimate content.
