Back in May, we reported on a UK woman who was taking Facebook to court to force the social network into revealing the identity of ‘trolls’ who had set up a false account in her name.
The woman in question ultimately won, with Facebook ordered to reveal the IP addresses of the perpetrators to police, for appropriate action to be taken against them.
Elsewhere, over in the States, Twitter finally complied with a police request for information regarding an anonymous Twitter user who was threatening an Aurora copycat-killing spree.
Dealing with trolls
The issue of trolling often leads to highly-charged debates around the best way to deal with anonymous muck-slingers. Indeed, I’ve previously argued that there is a very fine line between permitting freedom of speech online and giving a voice to people who really don’t deserve to have a voice.
Manchester City football player Micah Richards had to close his Twitter account after being bombarded with racial abuse. It’s often a little too easy for trolls to get a voice on the Web, and I argued that some form of verification would at least make it more difficult for people to hide behind a virtual mask of anonymity.
In the UK specifically, the punishments being meted out for so-called trolling have caused quite a stir, with some arguing that they’re not proportionate to the crime.
A young student in Wales was recently given a 56-day sentence after making what the judge called, “vile and abhorrent” comments. He posted some pretty offensive tweets about the collapse of Bolton Wanderers footballer Fabrice Muamba during a match, and these were forwarded to police by other Twitter users.
Along with a custodial sentence, the youngster’s academic career was also thrown into question, as he was suspended from his studies for the rest of the year – it’s amazing what a 140-character message can do to your life.
Earlier this year, a law student who had sent former footballer Stan Collymore a series of racist tweets was given a two-year community order.
More recently, a teenager was arrested by police investigating abuse of UK Olympic diver Tom Daley on Twitter. After finishing fourth in the men’s synchronised 10m platform diving event, the athlete received a message telling him he had let down his father, Rob, who had died in May 2011 from brain cancer.
Besides having his computer and phone seized by police, the 17-year-old perpetrator was issued with a harassment warning, which will remain on his record.
These are just a few instances, but we’re certainly seeing examples made of those caught broadcasting ill-conceived opinions on the World Wide Web. In the UK, at least.
Against this backdrop, the BBC reports today that campaign groups and “experts” from Oxford University are saying that the punishments are far heavier than in other countries.
Bernie Hogan, from the Oxford Internet Institute, has been monitoring what happens in other countries and although he says that the UK was “leading the way” in curbing online abuse, he says that as a country we are “incredibly heavy-handed.” This assertion isn’t shared by the Association of Chief Police Officers (Acpo), however, which notes:
“People have a right to publish their views but when these views become indecent, threatening or offensive then the individuals they affect also have the right to report them. The police will assist with any prosecution.”
While the cowardly nature of trolling can’t be ignored – an issue we’ve covered in depth before – it is surely hard to justify an actual custodial sentence of almost two months. There are plenty of other crimes that could be construed as far more serious, yet don’t end up with prison sentences.
However, it’s not like prison sentences are being dished out left, right and centre. Most of the time, warnings and fines have been considered adequate, and it’s difficult to view such punishments as ‘excessive’. People have to be held accountable for their actions…even on the Internet.
Also, trolls are often managed by the Twitter community itself. Racist comments and abuse aren’t generally well received online, and it would seem to be a far more sensible option to reel them in for questioning and a warning when someone is reported by the Twittersphere, or even ban them from Twitter (irrespective of how effective such a ban would be). Censorship? Sure, but the freedom-of-speech tenets can only be extended so far.
As it stands, we’ll likely see more trolls face the full force of the law – though here’s hoping the handful of high-profile cases we’ve seen so far in the UK will make folk think twice before unleashing a Twitter tirade.