Over the past few years, Facebook has been trying to answer a difficult question: How do you stop terrorists from spreading their hate online?
The social network, which now has more than 1.9 billion monthly users worldwide, has frequently been challenged to stem the flow of content and correspondence from terrorists, and it has implemented numerous tactics to address the issue, with varying degrees of success.
Back in 2015, it took down the profile of one of the attackers involved in the San Bernardino shooting because it contained pro-ISIS content. It also said that it restricted the accounts of some pro-Western Ukrainians after they were accused of hate speech that year. These efforts were driven by Facebook's own content monitoring mechanisms, staffed by human moderators, as well as by reports from users.
That wasn’t enough to stop the families of three victims in the San Bernardino attack from suing Facebook last month for enabling the terrorists to spread their propaganda and put their loved ones at risk. And in April, the social network came under fire for failing to respond to reports of content depicting gruesome acts of terror by a journalist from The Times who’d set up a fake profile to test the company’s takedown mechanism.
Admittedly, while Facebook has a responsibility to police content and prevent the spread of hateful messaging, it also has to tread carefully so as not to stifle users' freedom of speech, become a target for governments that want to censor social media, or invade people's privacy in the process.
In February 2016, The Wall Street Journal noted that Facebook had "assembled a team focused on terrorist content and is helping promote 'counter speech,' or posts that aim to discredit militant groups like Islamic State."
Is there a better, faster way of addressing this? Facebook believes that AI can help. For starters, it has begun using automated systems to identify photos and videos of terrorists by matching uploaded media against its database of flagged content, preventing them from spreading across its network.
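At its core, this kind of matching is a fingerprint lookup: hash the uploaded file and check whether that hash already appears in a database of flagged material. The sketch below is a deliberately simplified illustration using an exact-match cryptographic hash; real systems such as Microsoft's PhotoDNA rely on perceptual hashes that tolerate resizing and re-encoding, and all names and data here are hypothetical.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file (exact-match sketch)."""
    return hashlib.sha256(media_bytes).hexdigest()

# Illustrative database of fingerprints of previously flagged content.
flagged_hashes = {fingerprint(b"previously-flagged-image-bytes")}

def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches known flagged content."""
    return fingerprint(upload) in flagged_hashes

print(should_block(b"previously-flagged-image-bytes"))  # True: exact match
print(should_block(b"unrelated-holiday-photo"))         # False: no match
```

Note the key limitation this sketch shares with any exact-hash approach: changing a single byte of the file produces a different fingerprint, which is why production systems favor perceptual hashing.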
The company says it's also attempting to analyze text posts to see if they contain messages praising or supporting terrorist groups, so it can take further action. This is still in the works, and Facebook hopes its algorithms will become more effective as they encounter more data.
It’s also working on ways to identify material related to posts and groups that support terrorism, so as to sniff out clusters of sympathizers. Plus, it’s trying to identify fake accounts created by people who’ve been booted off the platform, so it can stop them in their tracks even if they go by a different name.
These measures are supported by a team of more than 150 experts focused solely on counterterrorism efforts.
That seems like a good start, but there’s clearly a lot more that can be done to quell the rise of terrorism. Last January, a number of top executives from Silicon Valley heavyweights like Apple, Google, Twitter and of course, Facebook, met with senior officials from the White House and US intelligence agencies to look at how they could collaborate to fight this battle together.
The company also partnered with Microsoft, YouTube and Twitter to build a shared database of hashes to accurately and efficiently identify content featuring terrorist imagery on their platforms.
So, what now? While Facebook works on improving its systems, it's also hoping to crowdsource ideas for other measures it can adopt to tackle hate on its social network. Cleaning up Facebook serves the company's own interest, but it will also clearly do society some good. Hopefully it'll be able to leverage new technologies and ideas from smart people around the world to curb the spread of propaganda.