
This article was published on August 22, 2019

Researchers propose aggressive new method to eradicate online hate groups

A team of researchers from George Washington University and the University of Miami recently published the results of a study to determine the ecological makeup of online hate groups. Based on their findings, they’ve come up with four strategies to disrupt these groups and, hopefully, eradicate them once and for all. Not all heroes wear capes.

Online hate groups are a scourge that, according to the researchers, thrives on the formation of networked “cluster groups.” Traditional studies have focused on the individuals who make up hate groups or the ideologies they support, but this study focused on the “network of networks” that binds hate groups together regardless of their geographic location.

Credit: Johnson et al.

The results of their comprehensive study into hate groups on Facebook and VKontakte indicate that, due to these clusters, hate groups are incredibly hard to weed out. According to a Nature journal review of the team’s paper:

Johnson et al. show that online hate groups are organized in highly resilient clusters. The users in these clusters are not geographically localized, but are globally interconnected by ‘highways’ that facilitate the spread of online hate across different countries, continents and languages. When these clusters are attacked — for example, when hate groups are removed by social-media platform administrators — the clusters rapidly rewire and repair themselves, and strong bonds are made between clusters, formed by users shared between them, analogous to covalent chemical bonds.
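That rewiring dynamic is easy to picture with a toy graph. Here’s a minimal sketch (my illustration, not the authors’ model) using Python and networkx: twelve tight clusters bridged by a few cross-cluster edges stand in for the “network of networks,” one cluster gets banned wholesale, and its displaced members rejoin under new accounts. All parameters are placeholders.

```python
# A minimal sketch, not the authors' model: clusters that "rewire"
# after a ban. Requires networkx; all numbers are placeholders.
import random
import networkx as nx

random.seed(0)

# 12 tightly knit clusters of 15 users each, with 5% of edges
# rewired across clusters -- a crude stand-in for the "highways"
# the paper describes between hate clusters.
G = nx.relaxed_caveman_graph(12, 15, 0.05, seed=0)

def largest_component(g):
    return max(len(c) for c in nx.connected_components(g))

print("before ban:  ", largest_component(G))

# "Ban" one whole cluster: nodes 0-14 form the first clique.
G.remove_nodes_from(range(15))
print("after ban:   ", largest_component(G))

# Rewiring: displaced members return under new accounts, each
# attaching to a few random surviving users.
survivors = list(G.nodes)
for i in range(15):
    for target in random.sample(survivors, 3):
        G.add_edge(f"rejoined_{i}", target)
print("after rewire:", largest_component(G))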


The research paper takes a deep dive into the nature of these clusters at the mathematical level and demonstrates how – without knowing any personal data about the members of these groups – weaknesses in the ecology of these hate groups could be exploited to eliminate them.

To this end, they’ve come up with four distinct policies (as outlined in the aforementioned Nature review) that could, if executed properly, strike at the very core of what allows hate groups to thrive – their ability to exploit the rules of the platforms they exist on in order to stay one step ahead of admins and moderators.

Policy One:

The authors propose banning relatively small hate clusters, rather than removing the largest online hate cluster. This policy leverages the authors’ finding that the size distribution of online hate clusters follows a power-law trend, such that most clusters are small and only very few are large.

Here the researchers suggest we snip out smaller groups before they absorb or combine with other groups rather than focusing our efforts on taking down the largest groups.
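What a power-law size distribution implies here is easy to see with synthetic numbers. A quick sketch (the exponent 2.5 is my placeholder, not the paper’s fitted value):

```python
# Illustrative only: synthetic cluster sizes drawn from a Zipf
# (discrete power-law) distribution. Exponent is a placeholder.
import numpy as np

rng = np.random.default_rng(1)
sizes = rng.zipf(2.5, size=5_000)  # 5,000 synthetic cluster sizes

small = sizes <= 10
print(f"clusters with <=10 members: {small.mean():.0%}")
print(f"largest single cluster:     {sizes.max()} members")
```

Almost all of the sampled clusters come out tiny while a handful are enormous, which is why the authors see far more productive targets among the small ones, provided they’re removed before they merge into something bigger.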

We’d all like to believe that if you chop off its head the beast will die, but that simply isn’t true. Hate groups aren’t composed of drooling sycophants marching behind their fearless leaders like fantasy orcs – they’re made up of people who generally think they’re in on a secret that the ‘normies’ don’t understand. If you remove the biggest offenders, it creates a power vacuum that sucks smaller groups towards a single point – essentially galvanizing them.

Executing this policy at the platform level could mean reserving large-scale bans (removing entire groups) for smaller outlier clusters that are gaining popularity, while hitting larger hate groups with myriad individual bans for specific posts.

Policy Two:

Banning a small number of users selected at random from online hate clusters.

This one’s a bit more difficult to imagine as an action item. Conventional wisdom states that moderators should act on anything that qualifies as hate speech the moment it happens. The idea is that the sooner the speech or the person saying it is banned, the less opportunity it has to reach and radicalize others. But that’s a bit naive, right?

Here, I believe the researchers could be suggesting that banning members of online hate groups at random, rather than simply conducting massive sweeps, serves as a disruptive force with greater long-term payoff. The people invested in hate groups consider bans to be both a minor, temporary inconvenience and a badge of honor — random bannings could make it harder for hate groups to rally behind banned members.
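As a back-of-the-envelope illustration (again my sketch, not the paper’s simulation), spending the same ban budget on one cluster or at random does comparable structural damage to the toy network from earlier:

```python
# Hypothetical comparison: 15 bans concentrated in one cluster
# versus 15 bans scattered at random across the ecosystem.
import random
import networkx as nx

def largest_component(g):
    return max(len(c) for c in nx.connected_components(g))

def fresh_graph():
    # Same toy topology as before: 12 clusters of 15 users.
    return nx.relaxed_caveman_graph(12, 15, 0.05, seed=0)

# Strategy A: sweep one whole cluster.
G = fresh_graph()
G.remove_nodes_from(range(15))
print("cluster sweep:", largest_component(G))

# Strategy B: the same ban budget, spent at random.
random.seed(3)
G = fresh_graph()
G.remove_nodes_from(random.sample(list(G.nodes), 15))
print("random bans:  ", largest_component(G))
```

On connectivity alone the two strategies look similar; the case for randomness is that it denies hate groups a martyr to rally around and makes moderation unpredictable, not that it shatters the graph.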

Policy Three:

Platform administrators promote the organization of clusters of anti-hate users, which could serve as a ‘human immune system’ to fight and counteract hate clusters.

The current paradigm for platforms that host both hate and anti-hate groups is that AI-powered solutions can’t really tell the difference, and most companies are scared stiff that they’ll end up looking like they’re supporting a left-wing hate group against a right-wing hate group. So they tend to treat both hate and anti-hate groups the same. It’d take one helluva courageous set of executives to start promoting groups that exist solely to speak out against hate groups on their own platform.

This feels more like a grassroots power-to-the-people kind of thing. But creating a mechanism for like-minded anti-hate groups to form their own clusters could have a spillover effect to counteract non-targeted recruitment efforts by extremist groups.

Policy Four:

Platform administrators introduce an artificial group of users to encourage interactions between hate clusters that have opposing views, with a view to the hate clusters subsequently battling out their differences among themselves.

Now we’re talking. We’ve all heard that you can’t fight fire with fire, but whoever coined that phrase probably never tried to explain how hate group recruitment tactics like “red pilling” (convincing a mark that your particular brand of extremist ideology is right and everyone else either doesn’t get it or is in on the conspiracy) are straight out of the generic-brand cult handbook.

Policy four says that the administrators of the hosting platform should create seek-and-destroy groups that target hate groups by exposing them to each other’s differing viewpoints. I liken this to undercover instigators who steer hate groups towards discussions of their differences rather than letting them huddle together in support of each other’s similarities.

It’s important to keep in mind that the researchers have done the math here. This isn’t a group of activists compiling sources; it’s scientists studying boatloads of data on the exact structure of specific hate groups’ networks, and on how this “network of networks” forms an incredibly complex ecosystem.

What their work tells us is that the online hate ecosystem is not a giant balloon filled with vitriol; we cannot poke it with the needle of truth and expect it to shrivel. Getting rid of hate groups is a matter of killing the roots beneath the surface, not just throwing away the rotten fruit.

Of course, for their part, the researchers advise caution to platforms and individuals considering deploying these policies. As Nature’s Noemi Derzy writes:

The authors recommend caution in assessing the advantages and disadvantages of adopting each policy, because the feasibility of implementing a policy will rely on available computational and human resources, and legal privacy constraints. Moreover, any decisions about whether to implement one policy over another must be made on the basis of empirical analysis and data obtained by closely monitoring these clusters.
