Twitter is taking some real steps to curb abusive content. While the platform has made numerous small updates to make it easier to report and punish trolls, today’s series of announcements is meant to make abusive tweets less visible in the first place.
First up, Twitter will now prevent users who have previously been banned from coming back onto the platform under a new username. Twitter isn’t saying exactly how (perhaps to prevent abusers from figuring it out), but it’s an important step forward. Even if the most dedicated trolls find a way around it, sometimes simply being a deterrent is enough.
Second, the platform will implement a safe search filter, turned on by default. Both potentially sensitive tweets and tweets from blocked or muted accounts will be hidden from search results, but you have the option to opt out of either of those filters.
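In pseudocode terms, the behavior described amounts to two independent filters applied to search results, each on by default and individually switchable. The sketch below is purely illustrative; the class names, fields, and defaults are assumptions, not Twitter's actual API or implementation.

```python
# Illustrative sketch of the default-on safe search behavior described above.
# Tweet, SearchSettings, and all field names are hypothetical.
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    author: str
    sensitive: bool = False  # flagged as potentially sensitive


@dataclass
class SearchSettings:
    hide_sensitive: bool = True      # on by default, user can opt out
    hide_blocked_muted: bool = True  # on by default, user can opt out


def filter_results(tweets, blocked_or_muted, settings):
    """Drop tweets that either filter (when enabled) says to hide."""
    results = []
    for t in tweets:
        if settings.hide_sensitive and t.sensitive:
            continue
        if settings.hide_blocked_muted and t.author in blocked_or_muted:
            continue
        results.append(t)
    return results
```

With both filters at their defaults, a sensitive tweet and a tweet from a muted account would both disappear from results; turning either setting off restores the corresponding tweets.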
Finally, Twitter will collapse “abusive and low-quality” replies so they don’t take up space deserved by less sucky tweets. You’ll still be able to access them by tapping a “show less relevant replies” button.
Twitter tells me it’s using machine learning to pick out said low-quality replies, looking at certain red flags. For example, if your tweet gets a nasty response from a newly created account with zero followers that doesn’t follow you, chances are it’s not contributing anything meaningful to the conversation. Thus, it will be hidden.
Twitter isn’t giving an exact time frame for when these features will be fully rolled out, and using machine learning to determine which tweets are safe will likely lead to some mistakes. Still, it’s refreshing to see Twitter making real progress toward becoming a safer environment. The company says it’s constantly working on making its AI smarter, and will continue to roll out updates in the “days and weeks ahead.”