
This article was published on May 3, 2018

Facebook’s ‘hate speech’ test is a better idea than you might think

Earlier this week, several Facebook users noticed a new option on all their Facebook posts: a small button at the bottom asking “Does this post contain hate speech?” It appeared on the most innocuous posts, including cat photos and boba tea parties.

If you selected “yes,” you’d be presented with a second set of options: “Hate speech,” “Test P1,” “Test P2,” and so forth. Clearly the quiz was not meant for primetime, and Facebook later confirmed it was a test it had been preparing that went live prematurely.

To say this attracted derision and skepticism would be an understatement: derision because the quiz was so obviously a test that wasn’t meant to be public yet, and skepticism about its purpose. What can Facebook’s users do to define hate speech that experts, and Facebook itself, haven’t already done?

I say it’s worth finding out.

The quiz may have been prompted in part by questions posed to CEO Mark Zuckerberg during his Congressional testimony last month — specifically, he was asked by Senator John Thune what his company was doing to improve its detection of hate speech. Zuckerberg said he and his team were trying to develop AI familiar enough with the nuances of human speech to catch it, but that wouldn’t happen for another 5-10 years. Until then, he said, they would have to rely on human reporting.

I’m not suggesting this is something Facebook should deploy to all users; the potential for abuse is too obvious to ignore. But deployed to a randomly selected group of users, it would be more likely to produce results that aren’t unduly tainted by bias, whether Facebook’s or anyone else’s.

To put it another way: this wouldn’t be a permanent fixture, with a “hate speech” button hovering under every innocuous food photo on the site. But if it were deployed to some users for a while, and they reported everything they considered hate speech, you would have, with some margin of error, a pool of user responses to the important question of what hate speech is. If properly reviewed by human eyes, the feedback of Facebook’s users on a sensitive issue that directly affects them could be invaluable.
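To make the sampling logic above concrete, here is a minimal sketch of how a random cross-section of users and their flags could be aggregated into an estimate with a margin of error. Everything in it is hypothetical: the user IDs, sample size, and response data are invented for illustration and do not reflect any method Facebook has described.

```python
import math
import random

def sample_users(all_user_ids, sample_size, seed=42):
    """Pick a random cross-section of users to receive the survey prompt."""
    rng = random.Random(seed)
    return rng.sample(all_user_ids, sample_size)

def flag_rate_with_margin(flags, z=1.96):
    """Estimate the share of surveyed posts flagged as hate speech,
    with a simple 95% margin of error for that proportion."""
    n = len(flags)
    if n == 0:
        return 0.0, 0.0
    p = sum(flags) / n                       # proportion of posts flagged
    margin = z * math.sqrt(p * (1 - p) / n)  # normal-approximation margin
    return p, margin

# Hypothetical population of 10,000 users; survey a random 500 of them.
user_ids = list(range(10_000))
surveyed = sample_users(user_ids, sample_size=500)

# Each surveyed user answers "Does this post contain hate speech?" (1 = yes).
responses = [random.choice([0, 0, 0, 1]) for _ in surveyed]  # invented data

rate, margin = flag_rate_with_margin(responses)
print(f"Flag rate: {rate:.1%} ± {margin:.1%}")
```

The point of the sketch is only that a modest random sample, rather than an always-on button for billions of users, is enough to get a usable signal, provided the responses are then reviewed by people.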

For a better example of how this could look, check out Facebook’s infamous two-question survey from earlier this year, which was created so Facebook users could help the site identify trustworthy news sources. According to a Facebook spokesperson, the test wasn’t for everyone, and you couldn’t opt into it. It would be run with different sets of people who represented a cross-section of users. This ensured the site would get a variety of opinions without being overwhelmed by those of every last one of its billions of users.

A simple question like this doesn’t necessarily solve anything. But when what constitutes unlawful and harmful speech directly affects Facebook’s users, it stands to reason that you’d at least put the question to some of them.
