This article was published on May 15, 2019

Facebook finally takes steps to limit abusive live streamers

Almost two months after a horrific terrorist attack in New Zealand, Facebook’s finally taking some concrete steps to stop the abuse of its live streaming feature. Today, the social network introduced a new ‘one strike’ policy to ban users who violate its community guidelines.

Guy Rosen, Facebook’s VP of Integrity, said in a blog post that the platform will ban users for a period of time when they violate its Dangerous Individuals and Organizations policy:

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”

He added that Facebook will soon prevent banned users from creating ads as well. However, it’s not clear how quickly the platform will act on abusive users. There’s also no clarity on what happens after the ‘set period,’ when a user regains the ability to post live videos. We’ve asked Facebook for further details, and we’ll update this post accordingly.

We’ve argued before that Facebook is often reactive in its approach to removing or preventing abusive videos on its platform. While it removed 1.2 million videos within hours of the Christchurch attack, TechCrunch found copies of the clip spreading across the site even 12 hours later.

In today’s announcement, Facebook argued that it failed to weed out some videos because they were edited or manipulated. However, that reasoning isn’t good enough for a company that prides itself on using AI and machine learning to keep harmful content off its platform.

To prevent that from happening in the future, the social network is investing $7.5 million in a research partnership with the University of Maryland, Cornell University, and the University of California, Berkeley. The collaboration will explore new techniques for detecting manipulated media.

Let’s face it: Facebook has taken preventive steps in the past, but they’ve often fallen short in practice. We’ll have to wait and see whether these new measures prove truly effective.
