Today, Reddit has implemented a tool discussed earlier this week to help users curb harassment. The tool, described by the New York Times, is a new step in automating the regulation of the website’s more unruly sectors while keeping users safe.
It’s also about a decade too late. Way too late.
The new feature was explained in detail on the platform by one of Reddit’s first engineers, Chris Slowe. Blocking individual users from sending personal messages was already available in a very basic form, but the new tool functions more like Twitter’s block feature: when you block a user, you will neither be notified of nor see any content directed at you, including comments and replies. Blocks aren’t visible to the blocked user, so there’s no risk of retaliation.
But really, why the hell did it take so long to get there?
Nestled within the Github page of the original block button lies a perfectly concise explainer of Reddit’s former philosophy:
[I]n general, we prefer to encourage the use of the downvote arrow for bad comments, and leave user-blocking for true harassment scenarios.
Compared to other social platforms like Facebook and Twitter, Reddit’s laid-back approach to conversation has hardened into a fiercely guarded stance on free speech. Of course, it’s hard to tell what a “true harassment scenario” would look like — especially considering the company itself has admitted to difficulty curbing toxic trolling on the site.
So yes, this feature is super late. And despite the apparent need for it, there will likely be pushback from redditors who believe the platform should continue to prize free speech above all. Then again, anyone who’s been on the receiving end of spamming or a raid might find it comes in handy.