Reynaldo Gonzalez’s daughter, Nohemi, was among the 130 people killed when religious extremists attacked Paris last year. Now he is suing Twitter, Facebook and Google for facilitating the spread of “extremist propaganda,” alleging that the three companies “knowingly permitted” ISIS to recruit, raise money and spread its message across their respective platforms.
According to court documents:
“For years, [the companies] have knowingly permitted the terrorist group ISIS to use their social networks as a tool for spreading extremist propaganda, raising funds and attracting new recruits.
This material support has been instrumental to the rise of ISIS, and has enabled it to carry out numerous terrorist attacks, including the 13 November 2015 attacks in Paris, where more than 125 were killed, including Nohemi Gonzalez.”
Lawsuits like this are especially troubling, because each company already goes to great lengths to police its platform and remove offending content.
Reviewing hundreds of millions of new pieces of content a day is a task even the largest companies struggle with. It is statistically impossible for any company operating at that scale to review, or even find, every instance of offensive content.
For now, we’re stuck with a mix of random manual review, users flagging offensive content, and artificial intelligence that scans for, and often finds, content that requires human moderation.
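The triage flow described above can be sketched in a few lines. This is a hypothetical illustration, not any platform’s actual system: the field names, threshold, and sampling rate are all invented for the example. Content reaches a human moderator through one of three paths, namely user flags, a confident automated classifier, or a random spot check.

```python
# Hypothetical sketch of a moderation triage pipeline. All names and
# thresholds below are illustrative assumptions, not real platform values.
import random

AI_THRESHOLD = 0.8   # assumed classifier confidence needed to escalate
SAMPLE_RATE = 0.001  # assumed fraction of posts pulled for random review

def needs_human_review(post, rng=random):
    """Return True if a post should be queued for a human moderator."""
    if post.get("user_flags", 0) > 0:              # flagged by other users
        return True
    if post.get("ai_score", 0.0) >= AI_THRESHOLD:  # classifier is confident
        return True
    return rng.random() < SAMPLE_RATE              # random spot check

posts = [
    {"id": 1, "user_flags": 3, "ai_score": 0.2},
    {"id": 2, "user_flags": 0, "ai_score": 0.95},
    {"id": 3, "user_flags": 0, "ai_score": 0.1},
]

class NoSample:
    # Deterministic stand-in for the random module: disables spot checks
    # so the example's output doesn't depend on chance.
    @staticmethod
    def random():
        return 1.0

queue = [p["id"] for p in posts if needs_human_review(p, rng=NoSample)]
print(queue)  # [1, 2]
```

Post 1 is queued because users flagged it, post 2 because the classifier score clears the threshold, and post 3 slips through, which is exactly the gap the paragraph above describes: anything the flags and the classifier both miss is only caught by an occasional random sample.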
As AI continues to improve, you’ll see fewer instances of extremism and other offensive content. For now, think of it as the price of admission to an open Web.