
Jack calls in reinforcements to measure Twitter’s toxicity

Twitter’s CEO has officially waved the white flag and called for outside help fixing the site’s toxicity. Specifically, he’s asked for help defining a metric of “conversational health” on Twitter.

While it’s good to see Twitter attempt to find a system for controlling the behavior of its users, human beings are too complicated to be algorithmically guided into playing nice.

It’s no secret that Twitter has, in recent months, been forced to face its epidemic of negativity, from Russian bots to the spread of fake news. Obviously, now’s the time for contrition and public vows to fix its business. CEO Jack Dorsey delivered this week with an unexpected tweetstorm laying out his feelings on the situation and his call to action.

Jack acknowledges several complaints about his site and what it’s become in the modern social media economy:

“We have witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers.”

To that end, he’s asked for help coming up with “health metrics” to gauge the quality of the site’s conversation. He cites as an example (but not a guideline) Cortico’s metrics for “shared attention, shared reality, variety of opinion, and receptivity.” Basically, those metrics measure whether enough people with different opinions are citing the same facts, talking to each other, and listening.
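
To make the idea concrete, here’s a toy sketch (my own illustration, not Cortico’s published method or anything Twitter has built) of how one such signal, shared attention, might be computed: measure how much the sources cited by different groups of users overlap. The tweet structure and the fields author_group and links are hypothetical.

```python
# Toy "shared attention" score: Jaccard overlap of the links each
# group of users cites. 1.0 means every group is discussing the same
# sources; 0.0 means their feeds share nothing. Purely illustrative.
from itertools import combinations

def shared_attention(tweets):
    links_by_group = {}
    for t in tweets:
        links_by_group.setdefault(t["author_group"], set()).update(t["links"])
    if len(links_by_group) < 2:
        return 1.0  # one group talking to itself is trivially "shared"
    scores = []
    for a, b in combinations(links_by_group.values(), 2):
        union = a | b
        scores.append(len(a & b) / len(union) if union else 0.0)
    return sum(scores) / len(scores)

tweets = [
    {"author_group": "camp_a", "links": {"apnews.com/x", "nytimes.com/y"}},
    {"author_group": "camp_b", "links": {"apnews.com/x", "foxnews.com/z"}},
]
print(shared_attention(tweets))  # 0.33: one shared source out of three
```

Even this trivial version hints at the problem Jack faces: the score is easy to game (a troll farm only has to sprinkle mainstream links into its tweets) and says nothing about tone.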

I understand why Jack’s looking for measurable data to work with. It’s got to be tricky finding a way to get hundreds of millions of people to play nice with each other without censoring any particular person or group. While it’s productive for Twitter to try to hold itself accountable, it’s asking for statistical, data-driven help with a problem that’s primarily philosophical in nature.

Suppose Twitter imposes a social media conversational health metric on a particular account. The metric says the account isn’t healthy because it retweets fake-news bots, or something similar. What do you do? Ban that account? Would metrics even be limited to certain accounts? Keywords? If a site-wide metric said health was suffering, would Twitter go on a purge to find the offending accounts and punish them? And what if someone deliberately set out to game the system? How could Twitter control for that?

Human beings sometimes don’t get along. That sounds flip to say when social media writes those arguments large across the internet. But even if Twitter were to find a metric that told it users were all talking about the same thing and using the right keywords, there’d still be arguments, instigators, and trolls.

I think that if Twitter actually buckled down and set to work removing bots and handling reports of abuse and TOS violations more promptly, whether through better automation or a more robust workforce, it might start to see the site get a little cleaner without having to compile data sets and metrics that could themselves be manipulated.

It’s a messy process, punishing people who might turn around and complain that they’re being treated poorly. But that’s a much more understandable way of defusing an argument between humans than relying on data that might not translate well into real-world actions.

