Yesterday, Twitter announced it’s testing a feature to filter out potentially offensive direct messages. Any DMs in the “Message requests” folder containing abusive or inappropriate content will be automatically moved to a section marked “Additional messages,” giving people the option to view the message or permanently delete it.
The safety feature will hide the message’s content and replace it with: “This message is hidden because it may contain offensive content.”
Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind. pic.twitter.com/Sg5idjdeVv
— Twitter Support (@TwitterSupport) August 15, 2019
Alongside this, the micro-blogging site harnessed AI technology in April to automatically flag abusive tweets without relying on human intervention.
Because of these features, offensive tweets are now easier to report and take down. But these safety measures have yet to bring about long-term change. Last year, a study by Amnesty International outlined the scale of threats made against women on Twitter, labeling the social platform “a toxic place” and the “world’s biggest dataset of online abuse targeting women.”
For women and other groups subjected to online harassment, a lot of abuse arrives via direct message (DMs require both parties to follow each other, or for the recipient to have opened their inbox to messages from anyone).
Twitter’s latest step to curb abuse on its platform resembles Bumble’s recent safety feature, which uses AI to automatically detect and blur “lewd images,” giving users the choice to view, block, or report the image to the app’s moderators.
Although it’s promising to see Twitter take action to protect women and others subject to harassment, more must be done to penalize those who use the platform to abuse others.