Earlier this week, several Facebook users noticed a new option on all their posts: a small button at the bottom asking “Does this post contain hate speech?” It appeared under the most innocuous posts, including cat photos and boba tea parties.
Yep, Facebook is asking whether my friend's boba tea meetup at a Pet Expo is hate speech. pic.twitter.com/eFMct3YXr5
— Gene Park (@GenePark) May 1, 2018
If you selected the “yes” option, you’d be presented with a second set of choices: “Hate speech,” “Test P1,” “Test P2,” and so forth. Clearly the quiz wasn’t ready for primetime, and Facebook later confirmed it was an internal test that went live prematurely.
To say this attracted derision and skepticism would be an understatement: derision because the quiz was so obviously an unfinished test, and skepticism about its purpose. What can Facebook’s users do to define hate speech that experts, and Facebook itself, haven’t already done?
Facebook — an industry leader in Artificial Intelligence who thinks that AI is going to solve all of its problems — is asking me if this post contains hate speech pic.twitter.com/4MqrGw8PFj
— brian feldman (@bafeldman) May 1, 2018
I say it’s worth finding out.
The quiz may have been prompted in part by questions posed to CEO Mark Zuckerberg during his Congressional testimony last month, when Senator John Thune asked what the company was doing to improve its detection of hate speech. Zuckerberg said his team was trying to develop AI familiar enough with the nuances of human speech to catch it, but that such a system was still five to ten years away. Until then, he said, Facebook would have to rely on human reporting.
I’m not suggesting Facebook should deploy this to all users; the potential for abuse is too obvious to ignore. But if the company deployed it to a randomly selected group of users, the results would be far less likely to be tainted by bias, Facebook’s or anyone else’s.
To put it another way: this wouldn’t be a permanent fixture, with “hate speech” hovering under every innocuous food photo on the site. But if it were deployed to some users for a while, and they reported everything they considered hate speech, Facebook would end up, within some margin of error, with a pool of user responses to the important question of what hate speech is. Properly reviewed by human eyes, the feedback of Facebook’s users on a sensitive issue that directly affects them could be invaluable.
For an example of how this could work, look at Facebook’s infamous two-question survey from earlier this year, created so users could help the site identify trustworthy news sources. According to a Facebook spokesperson, the survey wasn’t shown to everyone, and you couldn’t opt into it; it was run with different sets of people representing a cross-section of users. This ensured the site got a variety of opinions without soliciting the overwhelming opinion of every last one of its billions of users.
A simple prompt like this doesn’t necessarily solve anything. But when the question of what constitutes unlawful and harmful speech directly affects Facebook’s users, it stands to reason you’d at least put it to some of them.