This article was published on March 6, 2020

This algorithm could improve emergency responses by removing lies spread on Twitter

The system filters out misinformation spread by bots promoting false narratives

Image by: Ysingrinus

Story by Thomas Macaulay, Writer at Neural by TNW

An algorithm that filters out misinformation spread on Twitter could help emergency services respond to natural disasters and disease outbreaks.

The system uses AI to distinguish between legitimate reports and bot-generated messages in real time, creating a stream of only genuine information.

It was created by researchers from the University of Adelaide and the Australian data science organisation Data61. The team had initially been building an algorithm that searches social media for signals that a major event is unfolding. But they discovered that the genuine voices on Twitter were being drowned out by false information.


The recent bushfires in Australia provided a disheartening example. As flames swept across the country’s eastern and southern coasts, the hashtag #ArsonEmergency flooded Twitter feeds, but many of the accounts posting it were trolls and bots trying to mislead the public.


“There was a lot of polarization around this topic,” said Dr Mehwish Nasim, a research engineer at Data61. “People who were already climate change deniers were tweeting about arson emergency and created an echo-chamber where the spread of this narrative was reinforcing their existing beliefs.”

Her team realized their system would only work if it could remove the misinformation.


To find the perpetrators, the researchers first had to identify the typical behaviors of automated accounts.

Their analysis showed that bots typically had a high tweeting frequency, low topic diversity, and regularly posted the same URLs and hashtags. This helped distinguish them from real users, who are more likely to tweet about different subjects, use a variety of hashtags, include links to a mix of pages, tag other users, and post at a predictable frequency.

They used these insights to teach their algorithm to identify a bot. This allows it to scan through historical tweets to assess a user’s intention, and filter out the posts it doesn’t trust.
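The behavioural signals described above could be combined into a simple per-user score. The sketch below is illustrative only: the `Tweet` structure, thresholds, and weights are assumptions for the sake of the example, not the researchers' trained model.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    timestamp: float      # seconds since epoch
    hashtags: list
    urls: list

def bot_score(tweets, freq_threshold=50.0):
    """Heuristic bot score in [0, 1] built from the signals in the
    article: tweeting frequency, hashtag diversity, and URL repetition.
    Thresholds and weights are illustrative guesses."""
    if len(tweets) < 2:
        return 0.0

    # Tweets per day over the account's observed activity span.
    span_days = (max(t.timestamp for t in tweets)
                 - min(t.timestamp for t in tweets)) / 86400
    tweets_per_day = len(tweets) / max(span_days, 1e-3)

    # Fraction of distinct hashtags: bots tend to repeat the same ones.
    all_tags = [h for t in tweets for h in t.hashtags]
    tag_diversity = len(set(all_tags)) / len(all_tags) if all_tags else 1.0

    # How often the same URLs are reposted.
    all_urls = [u for t in tweets for u in t.urls]
    url_repetition = (1 - len(set(all_urls)) / len(all_urls)
                      if all_urls else 0.0)

    score = 0.0
    if tweets_per_day > freq_threshold:   # unusually high frequency
        score += 0.4
    if tag_diversity < 0.2:               # low hashtag diversity
        score += 0.3
    score += 0.3 * url_repetition         # repeated links
    return score
```

A filter would then drop tweets from users scoring above some cutoff, keeping only the accounts whose behaviour looks human.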

There are already numerous tools that search for bots on Twitter, but the researchers believe theirs has two unusual features that distinguish it from the others.

Firstly, it doesn’t need complete access to a user profile, which helps protect data privacy. And secondly, it streams up-to-the-second information, unlike other models that delay access to data by scraping user information.
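That streaming, partial-data design can be sketched as a lazy filter that keeps only a bounded window of each user's recent tweets, so no complete profile is ever required. The `is_bot` predicate and window size here are placeholders for whatever classifier is plugged in; this is a sketch of the design described in the article, not the researchers' implementation.

```python
from collections import defaultdict, deque

def filter_stream(tweets, is_bot, window=200):
    """Lazily yield only tweets whose authors look genuine.

    `tweets` is any iterable of (user_id, tweet) pairs, such as a live
    stream. `is_bot` is a caller-supplied classifier that sees only a
    bounded window of each user's recent activity, so no complete
    profile access is needed."""
    history = defaultdict(lambda: deque(maxlen=window))
    for user_id, tweet in tweets:
        history[user_id].append(tweet)
        if not is_bot(history[user_id]):
            yield (user_id, tweet)
```

Because the filter is a generator, downstream consumers see cleaned tweets as soon as they arrive rather than after a batch scrape completes.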

This combination enables the algorithm to reduce the spread of misinformation at scale and speed.

And when the next bushfires spread, it could help the emergency services to respond more quickly — whether the climate change deniers like it or not.
