According to court documents, defense contractor Lloyd “Carl” Fields Jr. was killed in November while training security forces in Jordan, in an attack for which ISIS has claimed responsibility.
His widow, Tamara Fields, seeks compensation and a public judicial declaration that Twitter is violating the Anti-Terrorism Act, which entitles plaintiffs to damages when a family member is injured or killed by an act of international terrorism.
A Twitter spokesperson told The Next Web:
While we believe the lawsuit is without merit, we are deeply saddened to hear of this family’s terrible loss. Like people around the world, we are horrified by the atrocities perpetrated by extremist groups and their ripple effects on the Internet. Violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear. We have teams around the world actively investigating reports of rule violations, identifying violating conduct, partnering with organizations countering extremist content online, and working with law enforcement entities when appropriate.
It’s against The Twitter Rules to “make threats of violence or promote violence, including threatening or promoting terrorism.”
The network has taken action to suspend or delete propaganda accounts in the past and even drew praise from FBI Director James Comey, who called Twitter “thoughtful and hardworking” in efforts to shut down accounts associated with ISIS. As of April 2, 2015, Twitter had shut down over 10,000 ISIS accounts.
But, according to the plaintiff, the steps don’t go far enough.
Fields claims Twitter has refused to take “meaningful action” to stop the Islamic State from using its service. The lawsuit cites several examples, including a single pro-Islamic State group that operated 335 active accounts on the social network.
One of the key elements discussed in court will be whether this claim has any merit as it pertains to Section 230 of the Communications Decency Act. Specifically, 47 U.S. Code § 230 — Protection for private blocking and screening of offensive material, which states:
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
No matter which way the court rules, the findings will have an impact on the conversation around free speech, censorship and the global fight against terrorism.
In fact, the conversation between Silicon Valley and the US Government has already started. Tech executives met with counterterrorism officials last week to discuss ways in which they could work cooperatively to fight terrorism online.
According to a meeting agenda obtained by The Guardian, US officials want to make it harder for terrorists to recruit and mobilize new members, a move Silicon Valley execs have been wary of, arguing it would weaken security for everyone, not just extremists.
Tech companies have proven willing to listen, but a compromise doesn’t appear close.