Facebook has launched the Online Civil Courage Initiative (OCCI) with the German government and civil liberties organizations in the country with the “objective of combating extremism and hate speech on the Internet.”
The company’s COO Sheryl Sandberg appeared in Berlin, where the project will be headquartered, alongside Gerd Billen from the Federal Ministry of Justice, to announce the €1 million, marketing-focused initiative to banish hate speech from the Web across Europe.
Hate speech has no place in our society – not even on the Internet. Facebook is not a place for the dissemination of hate speech or incitement to violence. With this new initiative we can better understand the challenges of extremist statements on the Internet and respond to them better. Together we can ensure that the voices of peace, truth and tolerance are heard. The best cure for bad ideas is good ideas. The best remedy for hate is tolerance. Counter speech is incredibly strong – and it takes time, energy and courage.
The initiative will bring together experts in the field with the hope of developing responses that can be used by organizations across the internet to deal with the issue.
Tools for people to “get involved in the field of counter speech itself” will also be developed by the OCCI, the announcement says.
Facebook already has a vast, human-powered infrastructure for reporting and, where appropriate, policing comments made on its platform. So it is likely to extend these capabilities in its bid for a hate-free world.
The company recently signed an online safety partnership with Germany’s Arvato, and a Facebook spokesperson told TNW:
Through this investment, Facebook wants to make sure that reports about content that may violate our community standards can be dealt with even more effectively. We want to have the best community operations team with the right languages and skills to respond quickly to reports and check them diligently, 24 hours a day, every day.
But the potential for tools that enable people to counter something they have seen on Facebook, rather than simply report it, is the more interesting prospect.
Facebook wouldn’t give anything away on this, but is it really considering offering its users some kind of tool to challenge a view they don’t like?
Admittedly there’s no global strategy for tackling hate speech online, but no one voted for Facebook to be in charge of leading its creation.
Hate vs offence
In international law, hate speech is pretty well defined as “the advocacy of hatred based on nationality, race or religion” and various treaties require governments to prohibit it.
But the laws around hate speech are balanced against the right to freedom of expression, including the right to “shock, offend or disturb.”
When the petition to ban Donald Trump from the UK for his comments about Muslims attracted more public support than any petition before it – enough to force a debate by MPs – many of those MPs came down in favor of challenging rather than silencing his views.
Lots of people felt it was hate speech worthy of a nationwide ban; many others didn’t. Democracy in action, you might say, between citizens and their representatives. But who has the overall say?
With Facebook’s stated “objective of combating extremism and hate speech on the Internet” it looks like that task may well be handed over to it in Europe.
Surely, though, given that “hate speech” is a globally recognized issue, only a supranational organization like the UN can convene the relevant state actors, perhaps with private companies, to work out the best approach if it’s decided that it must be policed?
Global hate police
Facebook doesn’t have a direct monetary incentive for appointing itself the global hate speech police. Perhaps it’s reputational gain, or the desire to be seen as proactive on issues that affect its platform tangentially, that is behind the push.
Perhaps it genuinely wants to make the world a less hate-filled place. Either way, giving a private company control of our views, the ideas we are allowed to share and ultimately our values is a dangerous path to walk.
In a previous noble task taken on by the company, it started offering a ‘safety check’ feature for people caught up in disasters. This was initially deployed only for disasters Facebook deemed relevant, which attracted criticism when the same tools weren’t activated for other emergencies.
In some ways, Facebook is stuck between a rock and a hard place. Perhaps it was just trying to help in the face of a world of problems. But it was controlling what we deemed important, nonetheless.
On top of this, it can already tell us who we’re allowed to be online via its controversial real-names policy, is grappling with deciding who gets access to which bits of the Web in its tough Free Basics rollout, and its algorithm already decides what news it thinks we should see.
And now it’s getting an ever-greater role in telling us exactly what we’re allowed to say?
The root causes of hatred are diverse, difficult and complicated, so of course any research into this area is welcome.
But efforts like this from the world’s favorite platform are liable simply to push people who want to express certain views to other parts of the Web.
You also have to wonder whether this kind of action will reduce legitimate political debate as people self-censor in fear of what might happen if they ask certain questions in particular places.
Indeed, the UK government has launched its own hub to help combat ‘extremism, radicalisation, hate,’ pointing to education and conversation as the basis for understanding and dealing with these issues.
If we think appointing Facebook to police the Web can substitute efforts by society to tackle alienation, the effects of war, or inequality, then we’re as ridiculous as Donald Trump.