This article was published on July 5, 2018

Facebook accidentally flagged the Declaration of Independence

Facebook took down a post containing passages from the Declaration of Independence, saying they were hate speech. It has since apologized — though the algorithm may have been justified in flagging the material.

A Texas newspaper called the Liberty County Vindicator decided to celebrate July 4th by posting excerpts from the Declaration of Independence. The tenth post in the series, containing paragraphs 27-31, was removed from the paper’s Facebook scheduling queue, and the paper was notified that the post violated Facebook’s hate speech standards.

Facebook eventually restored the post, which Vindicator editor Casey Stinnet acknowledged in an article on the newspaper’s website.

Facebook posted a Hard Questions blog post on its Community Standards, in which it addressed the intricacies of hate speech. In it, Richard Allan, VP of public policy, said the company was trying to take into account the context and intent behind speech, but acknowledged it was not always successful in doing so. He also said, “we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech.”


More recently, Facebook released hard numbers on how many problematic posts its algorithms caught before users reported them. Hate speech had the lowest detection rate, at just 38 percent. Facebook’s VP of product management, Guy Rosen, at the time attributed the low rate to the difficulty of determining context.

In the case of this historical document, editor Stinnet rightfully pointed out that it contains the phrase “the merciless Indian Savages,” which… yeah. If that’s the phrase that tripped Facebook’s alarms, it’s understandable why the system would consider it hate speech.

But this does point to a hole in the AI’s learning. Namely, it shows Facebook’s hate speech algorithm has little understanding of the historical context and cultural dissonance behind those words, and at the very least it should be fed the text of important documents such as this one so it knows what not to flag in the future.
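To make the problem concrete, here is a rough, purely hypothetical sketch of a context-free keyword filter (Facebook’s actual classifier isn’t public; the term list and matching rule below are assumptions for illustration only). A filter like this only sees that a listed term appears somewhere in the text; it has no idea whether the source is an 18th-century founding document or a present-day post.

```python
# Hypothetical illustration of a context-free keyword filter. This is NOT
# Facebook's actual system (its implementation isn't public); the term list
# and matching rule are assumptions made for the sake of the example.
FLAGGED_TERMS = {"savages"}

def is_flagged(post: str) -> bool:
    """Flag a post if any listed term appears in it, regardless of context."""
    words = {w.strip(".,;:!?\"'()").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

excerpt = ("He has excited domestic insurrections amongst us, and has "
           "endeavoured to bring on the inhabitants of our frontiers, "
           "the merciless Indian Savages...")

print(is_flagged(excerpt))  # True: the filter can't tell a historical
                            # document from a present-day slur
```

Handling context would take far more than a term list, which is exactly the gap Allan and Rosen described.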
