Is Twitter responsible for removing accounts that may offend?
What if the account tweeted vulgar (even illegal in many countries) claims including:
“The Jew is the main enemy of the Germans; the Jew is the main enemy of all non-Jewish peoples!”
“There were no gas chambers for mass extermination in the Nazi concentration camps”
“The Jews have committed proven ritual murders of Christian children”
The account in question, @Heil_Hitler_88, is stacked full of tweets that would have you arrested in most countries. (Update: Interestingly, as I wrote this post, nearly all of the tweets were deleted, only for equally controversial tweets to begin appearing again in German.)
There is plenty of precedent for the removal of this sort of content, mainly on Facebook, where multiple racist and pro-Nazi groups have been shut down by the social network. Just this month, a Facebook group calling for the “reopening” of an Austrian concentration camp was taken offline after attracting more than 13,000 members.
In another case, an iPhone audiobook of Mein Kampf, Hitler’s semi-autobiographical manifesto, was eventually removed by Apple from the App Store for using the swastika on its cover.
We have emailed Twitter for a response and have yet to hear back, but the tweeter who notified us about the account has personally been in touch with Twitter, which said it will refrain from policing the service. The only way the account can be suspended is if enough people report it as spam and block it. What are you waiting for?
But with this level of precedent, should Twitter act without pressure or requests from its users?