Jane Manchun Wong, a security researcher who has a history of revealing yet-to-launch features through reverse engineering, tweeted today that Twitter is testing a “replies moderation” tool.
Twitter is testing replies moderation. It lets you to hide replies under your tweets, while providing an option to show the hidden replies pic.twitter.com/dE19w4TLtp
— Jane Manchun Wong (@wongmjane) February 28, 2019
According to Wong, the feature allows users to hide replies under their tweets while giving other users an option to reveal those hidden replies. Since her tweet, multiple Twitter users have raised concerns about what this means for the community, with some calling the feature "a double-edged sword."
In Wong's thread, another user noted the implications this could have for the spread of disinformation, pointing out that it could create bias.
So now nobody will be able to fact check politicians in line.
Great.
— Lesley Carhart (@hacks4pancakes) February 28, 2019
In Twitter's defense, Wong noted that the platform will also give people the option to view the hidden replies, which she calls "a new opportunity for people to call them [politicians] out." But a designer in Wong's thread said she believes "user lazyness" [sic] will keep people from seeing a tweet's hidden replies, as the option is tucked away at the bottom of the tweet's settings.
That doesn’t sound like moderation, it sounds like mark sensitive. If it works that way that’s ok but still I wonder how many people click the show sensitive now, how many fact checks will be silenced by user lazyness
— Circuit Swan / was Amazonv (@CircuitSwan) February 28, 2019
This feature would give users the responsibility to moderate their own content, taking the pressure off Twitter to control what is said on its platform. It feels like a cheap win: moderation is fundamentally a labor problem, and hiring (far) more moderators would go a long way.
With this new feature, Twitter seems to be shifting that responsibility onto users, who now have to decide whose comments to hide – with all the attendant risk of backlash – instead of Twitter simply setting a clear set of rules it can apply to everyone.
As Twitter is under a lot of pressure to clean up its platform, this tool could be a step in the right direction: giving users the option to remove hateful or inappropriate replies could make for a safer environment. But it could also be used to hide information and create bias.
As Wong points out, Twitter is one of the most popular platforms for political debate, so if this feature rolls out, let's hope people go the extra mile and seek out tweets on both sides of an argument. But if history has taught us anything, we shouldn't get our hopes up too high.