The filter – a result of consulting with anti-harassment groups – will let users block out specific words (think racial slurs, misogynistic terms, etc.) or entire hashtags. The company has previously provided tools to let you report abusers, but this would be the first time you can prevent harassment without having to actually see it first.
Filtering hashtags is particularly important for when the crowd goes into hive-mind mode and coalesces around a single offensive idea (like Gamergate), but I’m hoping the filter goes deeper than simply detecting specific keywords.
Trolls have been working their way around filters for ages, after all. Whether that means creatively misspelling slurs, coining entirely new ones, or saying offensive things in non-blatant ways, abusers tend to find ways to avoid triggering simple keyword matches.
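To illustrate the problem (this is a toy sketch using a benign placeholder word, not Twitter's actual implementation): a filter that only matches exact keywords misses trivially obfuscated spellings, while even light normalization – mapping leetspeak characters and collapsing repeated letters – catches some of them:

```python
import re

BLOCKED = {"troll"}  # placeholder blocklist; a real one would hold slurs

# Common character substitutions used to dodge keyword filters.
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e",
                          "4": "a", "5": "s", "@": "a", "$": "s"})

def _canon(text: str) -> str:
    """Lowercase, undo leetspeak, collapse repeated letters ("lll" -> "l")."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1+", r"\1", text)

# Canonicalize the blocklist the same way so entries stay comparable.
BLOCKED_CANON = {_canon(w) for w in BLOCKED}

def naive_filter(text: str) -> bool:
    """Flags only exact keyword matches."""
    return any(w in BLOCKED for w in re.findall(r"\w+", text.lower()))

def normalized_filter(text: str) -> bool:
    """Flags keywords even after simple obfuscation."""
    return any(w in BLOCKED_CANON for w in re.findall(r"\w+", _canon(text)))
```

Here `naive_filter("what a tr0lll")` returns `False` while `normalized_filter` flags it. Even so, normalization only goes so far – novel coinages and obliquely phrased abuse are exactly why something smarter than keyword matching is needed.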
It’s 2016, so why not use some of those fancy AI tools to detect harassment? Doing so would put Twitter at risk of venturing into censorship, but that’s true of any content filter; it’s a matter of striking the right balance.
Besides, Twitter’s extremely public nature means it probably needs a filter more than any other platform, yet others have beaten it to the punch. Facebook just introduced a keyword system for Instagram comments.
With growth continually slowing, the pressure is on for an influx of new users. But if Twitter wants to pull them in, it has to make sure they feel safe first.