
The World Wide Soapbox: Let freedom of speech flourish, but don’t feed the trolls

“I disapprove of what you say, but I will defend to the death your right to say it,” wrote Evelyn Beatrice Hall in her biography of Voltaire. This much-quoted mantra is founded on a sound principle: freedom of speech is such an important tenet of society that, regardless of the rights or wrongs of someone’s belief system, they should be allowed to say what they like. Whether people are prepared to listen is another thing altogether, of course.

But how far does this principle extend? Surely there are limits? As we’ve seen in the Internet era, there are times when people simply shouldn’t have a right to a voice.

Before we turn to the World Wide Web, though, let’s look at the non-digital realm for a little real-world context.

When the UK’s far-right British National Party (BNP) won its first county council seats in 2009, along with two seats in the European Parliament, it sparked a wave of debate over how much of a public soapbox the party should be permitted. The debate came to a head when BNP chairman Nick Griffin was invited to appear on the BBC’s flagship political debate programme Question Time, prompting protests and complaints from around the country.


So why did the BBC allow Griffin to appear on the show? Well, Deputy Director General Mark Byford said that it wasn’t the BBC’s role to censor, and that the BNP’s European vote meant the corporation had to allow the party on as part of its “responsibility of due impartiality”. Removing your own beliefs from the equation for a second, the party did secure somewhere in the region of a million votes from the public and, well, it’s difficult to argue with the BBC’s stance given its supposed nonpartisan political position.

Then there was British historian David Irving, who was sentenced to three years in jail by an Austrian court in 2006, over comments he made about the Holocaust during a speech and interview he gave in the country in 1989.

History is littered with similar events that have sparked highly charged debate. This highlights the inherent struggle between living by the freedom-of-speech principles preached by democratic nations, and the desire to keep potentially inflammatory or offensive behaviour at bay.

Today though, there is inflammatory and offensive behaviour all around us on the Web. Last month, we reported that the BBC had unmasked and confronted a real-life Internet troll, but as fascinating as this was, it won’t solve the underlying problem. You see, there are plenty more trolls where he came from, and the anonymity afforded by many online publishing platforms means they’ll continue to proliferate.

The World Wide Soapbox

The greatest thing about the Internet is also its Achilles’ heel: everyone has a voice if they want one. The ‘World Wide Soapbox’ is an open-all-hours troll magnet, a platform where anyone can say what they want without fear of retribution. And this is one of the key differentiators between the online and offline spheres. You may wholeheartedly disagree with the decision to allow Nick Griffin a public platform, and you may think that the three years dished out to Irving was deserved, but at least they’re accountable for what they do and what they say. They can’t hide from their actions.

The New Haven Independent, a local Connecticut title, recently suspended commenting on its site. “The tone of commenting on the Independent…seems to have skidded to the nasty edges and run off the rails,” wrote the newspaper’s founder Paul Bass. “We’re responsible for reading, vetting, and posting all comments on the site. We’ve failed in our responsibility to keep the discussion on track.”

Commenting was reopened two weeks later, with the publication announcing:

“After a two-week time-out on publishing reader comments, we have turned the switch back on. With your help, we spent the time debating how best to continue including our readers’ insightful, fun, passionate comments (the best in the business) without resuming publishing the sewer that our comments section had become over the past six months.”

After this two-week debate, what solution did it arrive at? Well, nothing particularly groundbreaking as it happens. “We’re keeping the same rules we had before but, a) enforcing them more strictly; and b) publishing far fewer comments, even if they meet the rules,” it said.

“We have come to understand that the comments section is no Hyde Park, no open forum for free-speech spouting”, it continued. “Our comment stream will be curated by experienced editors. Some comments – we expect far fewer than previously – will be posted; some will be forwarded to our writers to consider the input for follow-up reporting; some will be trashed for violating the rules; and some will be regretfully deleted with second guessing about whether we have made the right decision.”

So that was its innovative solution: good old-fashioned censorship and human vetting – and I have to say, if the comments section is to be a key part of a publisher’s site, it’s difficult to come up with a better answer, despite all the other tools at our disposal.

Finding a solution

This is often the fall-back for humans in the digital age: when we encounter new problems, we look to the past for precedent on how to act. Before the Internet, newspapers would receive letters from readers – the most thoughtful and interesting ones would make it onto the editor’s desk, and the rest would go straight in the trashcan. Simple.

In the digital age, though, it can be a costly, time-consuming exercise. With publications looking to cut back and streamline, having personnel sift through the comment rubble for golden nuggets perhaps isn’t the best use of resources. So are there other options?

Forcing readers to comment using Facebook Connect makes a hell of a lot of sense in principle. Most people use their real identities on Facebook but, speaking from my own perspective, I really don’t like commenting using my Facebook credentials and I know I’m not alone in this. I prefer my Facebook account to be a closed conduit for me to keep in touch with my friends.

Then there’s Twitter. Whilst I’d much rather comment using my Twitter profile, we’re back to square one with the ‘anonymous’ issue. Of course, if Twitter introduced a real-name policy, or some other form of verification, then this would work…but we don’t want that. The value of Twitter is that it gives everyone a voice, and as we’ve seen this past year with the Arab uprisings, anonymity is a core facet of that.

However, this doesn’t mean we can’t reach some kind of compromise. Could Twitter introduce a policy whereby users can opt to ‘verify’ their identity, and in return receive additional privileges – such as the ability to post comments on publishers’ websites? This would at least go some way towards easing the tension between the two sides of the argument. Ultimately, I can’t help but think that Twitter, in the long term, will have to change the way it verifies accounts, something which The Next Web’s Drew Olanoff concurs with.

As a slight aside, Manchester City football player Micah Richards had to close his Twitter account last month after being bombarded with racial abuse. It’s too easy for trolls to get a voice on the Web, and some form of verification would at least make it more difficult for people to hide behind a virtual mask of anonymity.

Human intervention

I don’t think there is an easy solution that doesn’t involve at least some human intervention. The best scenario is likely this: if you want to comment on a website, you should have multiple options. Those happy to be completely open about who they are – most likely those using Facebook Connect, for example – should be able to post immediately, with their comments appearing there and then.

For those wishing to comment through anonymous means – be it a website’s own proprietary commenting system, Twitter or something else – there would be a delay before the comment appears so that someone can vet it. It’s also an idea to have a delay in place for first-time commenters on a site: if you’re using a website’s own commenting system, enforce a 24-hour period between setting up a new account and being able to comment. It won’t always work, but it should help cut back on some of the opportunistic, transient trolling.
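To make that workflow concrete, here’s a minimal sketch of the triage logic in Python. The names (Commenter, triage_comment and so on) are purely illustrative assumptions on my part, not any publisher’s actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum


class Disposition(Enum):
    PUBLISH_NOW = "publish immediately"
    DELAY_NEW_ACCOUNT = "wait out the 24-hour cooling-off period"
    HOLD_FOR_REVIEW = "hold until a moderator vets it"


@dataclass
class Commenter:
    verified_identity: bool    # e.g. signed in via a real-identity service
    account_created: datetime  # when the commenting account was set up


def triage_comment(commenter: Commenter, now: datetime) -> Disposition:
    """Decide what happens to a newly submitted comment."""
    # Openly identified users post immediately; their comments appear there and then.
    if commenter.verified_identity:
        return Disposition.PUBLISH_NOW

    # Brand-new anonymous accounts wait 24 hours before they can comment,
    # to cut back on opportunistic, transient trolling.
    if now - commenter.account_created < timedelta(hours=24):
        return Disposition.DELAY_NEW_ACCOUNT

    # Everything else posted anonymously is queued for human vetting.
    return Disposition.HOLD_FOR_REVIEW
```

The point of the split is that only the anonymous routes incur the cost of human vetting; readers who are open about who they are keep a friction-free experience.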

Despite the fact that technology has had such a monumental impact on society and the way we communicate, if The New Haven Independent is anything to go by, human intervention will have to be mandatory if we want to maintain a decent standard of conversation on news sites. Whilst I would argue that anonymity is at the root of the problem, I don’t think forcing people to comment as themselves is ultimately the answer, though I do believe it would cut out a lot of the abuse that takes place.

If we’re to look at the past for precedent, newspapers have never been able to insist on a real-name policy when accepting letters from readers. Simply ‘Anon, USA’ would suffice, and where ‘real’ names were used, there was no way of knowing if they were real. But it didn’t really matter, because all the letters had to be vetted before they could be published.

Another idea would be, if a budget can be found, to have dedicated comment curators – working remotely, in-house, full-time, part-time, interns…whatever – to keep the conversation on-topic and stop it from spiraling out of control. Sometimes the comments on an article make for an infinitely better read than the article itself, and many a verbal battle has been fought across news sites – but this is also fertile soil for trolls to ply their trade.

The ease with which angry individuals can post extreme or downright offensive ‘opinions’ online is an ongoing issue, and one that the digital media industry must tackle head-on. The Internet is an open forum and everyone should be allowed a voice within reason, and whilst I may not agree with everything someone says, for the most part I believe in their right to say it. But I also understand that there are times when censorship is needed – personal and potentially hurtful abuse shouldn’t be allowed.

We don’t want to see all the New Haven Independents out there shutting up shop to the public because of trashy commenting. There must be a better way to permit freedom of expression, without giving a voice to all the trolls.
