According to VP of Trust and Safety Del Harvey and Director of Product David Gasca, Twitter defines so-called troll behavior as follows:
What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don’t but are behaving in ways that distort the conversation.
To identify this behavior, the company has picked out certain patterns it associates with troll-like accounts — for example, a user who signs up for multiple accounts at once, or one who repeatedly tags people who don't follow them in tweets.
According to Twitter’s Safety account, these “signals” can be identified by an algorithm, and are tied to behavior, not the content of the tweets themselves. This would seem an attempt to shield the company from accusations of silencing anyone of a particular political persuasion.
Today we are introducing new behavior-based signals into how Tweets are organized and presented in areas like conversations and search.
This is to improve the health of the conversation and improve everyone’s Twitter experience.
— Twitter Safety (@TwitterSafety) May 15, 2018
Once it has identified troll-like behavior, Twitter will place the user in a sort of quarantine: their replies will be invisible to other users unless those users press the "Show more replies" button under a tweet.
Note this doesn’t actually mean any content is removed. The problematic content is just hidden. If this sounds familiar, it’s very similar to a tactic called “shadowbanning,” a practice that Twitter has already been accused of using in the past.
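The mechanism described above — score an account on behavioral signals, then collapse its replies past a threshold — might look something like the toy sketch below. The signal names, weights, and threshold here are purely hypothetical illustrations; Twitter has not published its actual implementation.

```python
# Hypothetical sketch of behavior-based reply demotion.
# The signals and weights are invented for illustration; this is NOT Twitter's algorithm.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    accounts_created_same_session: int  # sign-ups made in the same session
    mentions_of_non_followers: int      # tags of users who don't follow the account
    total_mentions: int                 # all @-mentions the account has made


def troll_score(activity: AccountActivity) -> float:
    """Combine behavioral signals into a score in [0, 1]. Note that tweet
    content is never inspected — only how the account behaves."""
    score = 0.0
    # Signal 1: multiple accounts registered at once.
    if activity.accounts_created_same_session > 1:
        score += 0.5
    # Signal 2: high ratio of mentions aimed at non-followers.
    if activity.total_mentions > 0:
        ratio = activity.mentions_of_non_followers / activity.total_mentions
        score += 0.5 * ratio
    return min(score, 1.0)


def should_collapse_replies(activity: AccountActivity, threshold: float = 0.6) -> bool:
    """Replies from accounts scoring above the threshold are not deleted,
    just hidden behind a 'Show more replies' control."""
    return troll_score(activity) >= threshold
```

An account that registered three accounts in one session and aims 9 of its 10 mentions at non-followers would score 0.95 and have its replies collapsed, while an ordinary account scoring near zero would be unaffected.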
Gasca and Harvey don't say whether the flagged tweets would be demoted for everyone, or only for the specific users they've been known to target. Either approach would have its pros and cons.
Still, Twitter's discourse is often colored by such tweets, and outright banning the people involved would open the platform to cries of censorship (well, more than it already faces). Demoting rather than removing is a sensible compromise — though the algorithm's effectiveness at detecting trolls remains to be demonstrated.