
This article was published on March 18, 2019

Why can’t Facebook keep problematic videos off its platform?



In the wake of the horrific shooting in Christchurch, New Zealand, that claimed the lives of 50 people and injured several more, Facebook noted it had removed some 1.5 million video clips of the attack from its platform.

That’s a good move to keep the gruesome content – which may serve to glorify the shooter, encourage others, and traumatize the families of survivors – from spreading like wildfire. But couldn’t the company do more to prevent it from being shared on the social network at all?

It’s true that we’re asking a lot of companies in this regard – keeping digital content offline is no easy task. But major tech firms that control their own platforms should be held to high standards, particularly because they build those platforms for massive audiences across the globe and across age groups, and because they can afford to tackle such problems.


In Facebook’s case, it seems an awful lot like a matter of prioritizing its needs over those of its community. While it may consider the health and hygiene of its platform as important, it also has metrics like daily active users and engagement to worry about. And provocative content can encourage the growth of such numbers.

The company noted it was able to block 1.2 million videos of the Christchurch attack as they were being uploaded, which is great – but TechCrunch found copies of the clip on the site more than 12 hours after the incident, which is not so great.

How does this happen? On the one hand, it’s commendable that the company was able to act quickly on a large number of videos before they could hit the platform, using audio detection and the help of human content moderators (of which it has around 10,000 worldwide).
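To make the re-upload problem concrete: systems like this typically reduce every known copy of a banned clip to a compact fingerprint and compare new uploads against that set before they go live. Facebook hasn’t published the details of its pipeline, so the sketch below is purely illustrative – the fingerprints, threshold, and function names are invented, and real systems use robust perceptual hashes of audio and video frames rather than toy bit strings.

```python
# Toy sketch of fingerprint matching against a blocklist of known clips.
# Real systems use robust perceptual hashes of audio/video frames; this
# illustrative version just compares made-up bit-string fingerprints.

KNOWN_FINGERPRINTS = {
    0b1011001110001111,  # fingerprints of previously removed copies (made up)
    0b1011001110101111,
}

MATCH_THRESHOLD = 3  # max differing bits to still count as "the same clip"


def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions where two fingerprints differ."""
    return bin(a ^ b).count("1")


def is_known_banned_clip(upload_fingerprint: int) -> bool:
    """Return True if the upload is close enough to any banned fingerprint."""
    return any(
        hamming_distance(upload_fingerprint, known) <= MATCH_THRESHOLD
        for known in KNOWN_FINGERPRINTS
    )


# A slightly re-encoded copy flips a couple of bits but still gets caught:
print(is_known_banned_clip(0b1011001110001101))  # True
print(is_known_banned_clip(0b0000110001110000))  # False
```

The hard part, of course, isn’t the lookup – it’s producing fingerprints that survive cropping, re-encoding, and screen recording, which is where uploaders keep finding ways around the filter.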

On the other, it’s worrying that Facebook allows this sort of content on its platform at all. If it wanted, it could penalize those users who uploaded this video and variations of it, perhaps with a ban or a warning.

It could also delay videos from being published until they’d been reviewed by humans. Sure, that introduces friction into the process of sharing content on Facebook, but that seems like a fair price to pay to keep users safe. If you don’t like these rules, you don’t have to use Facebook for your video-sharing needs.
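In product terms, that would mean a hold-for-review queue sitting between upload and publication: nothing becomes visible until a moderator signs off. The sketch below is a minimal, hypothetical illustration of that gate – the class and method names are invented here, not drawn from any Facebook system.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict


class ReviewStatus(Enum):
    PENDING = "pending"      # uploaded but not yet visible to anyone
    APPROVED = "approved"    # reviewed by a human and published
    REJECTED = "rejected"    # reviewed and blocked


@dataclass
class ModerationQueue:
    """Hypothetical hold-for-review gate between upload and publication."""
    videos: Dict[str, ReviewStatus] = field(default_factory=dict)

    def submit(self, video_id: str) -> None:
        # Every new upload starts out hidden, pending human review.
        self.videos[video_id] = ReviewStatus.PENDING

    def review(self, video_id: str, approve: bool) -> None:
        self.videos[video_id] = (
            ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED
        )

    def is_visible(self, video_id: str) -> bool:
        # Only approved videos ever reach other users' feeds.
        return self.videos.get(video_id) == ReviewStatus.APPROVED


queue = ModerationQueue()
queue.submit("clip-123")
print(queue.is_visible("clip-123"))   # False: still waiting on a moderator
queue.review("clip-123", approve=True)
print(queue.is_visible("clip-123"))   # True: published after human review
```

The friction is the point: the cost of the delay falls on the uploader, not on the people who would otherwise see the clip in their feeds.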

The exact methodology isn’t immediately important in this conversation; Facebook may well have tools to throw at this problem that we don’t know about. What’s important is that the company needs to take a firm stance on how it handles problematic content.

Rahul Gonsalves, the founder of Bangalore, India-based web and mobile design firm Obvious, explained, “At the scale that Facebook operates, any kind of reactionary product decision is evidence of the issue in question not being prioritized within the company.”

That makes sense to me: the only way Facebook – which has offered tools to host video and live broadcasts to a global audience for years – finds itself scrambling to take down videos and then missing a number of them is if it doesn’t care enough to tackle the problem before it occurs.

Granted, the solution requires a nuanced approach, so that information and news in the public interest aren’t stifled by platforms. The context in which people share clips matters as well, and it’s important to enable varied discussions around such content.

But unless Facebook takes a stand, it won’t be able to stop the spread of problematic content on its platform. That’s down to the company deciding how it wants to do business. It needs to decouple its policies for tackling content from its revenue targets.

That’s a hard call to make, and it needs to come from the top. Hopefully, the company will manage it before the next major violent incident somewhere in the world. We need to stop pretending that it’s impossible to discern what sort of content is problematic and what’s not.

