Since the start of 2017, Twitter has been working hard to tackle abuse on its platform. It has introduced features to help you avoid tweets and messages from blocked accounts in search and in your inbox, and it now prevents banned users from signing up with a fresh username.
In statements to Mashable and TechCrunch, Twitter noted that it was presently testing this measure, and that profiles will be hidden only if you’ve adjusted your settings to filter potentially offensive material.
But it’s worth noting that tech analyst Justin Warren – whose account was greyed out when Mashable spotted the new safety feature in action – wasn’t informed that his profile was being hidden, and didn’t know of any tweets of his that may have triggered the filter (his profile has since been unflagged and is visible even to logged-out users).
With this new initiative, Twitter is heading into difficult terrain. It's one thing to train bots to look for curse words, hate speech and images depicting nudity, but it's another to slap an R rating on entire profiles – particularly if those users aren't even aware of how their accounts look to outsiders.
Imagine a potential employer looked up your profile and found it greyed out; they could easily get the wrong impression about your online presence. Or imagine you had an important idea to share, but people couldn't see your tweets because you cursed once. Those are difficult situations for Twitter to wrangle itself out of.
Having said that, I'm not rummaging around for my pitchfork just yet. Twitter tests features like this all the time, and it's doing so at a time when it's actively looking for ways to make its platform better for users. It just needs to be careful not to end up censoring users in the process, stifling free speech when the company is in the business of enabling it.