It’s become popular to criticize mainstream media outfits. The Wall Street Journal, New York Times, CNN, and others have all recently come under fire from members of the Trump administration. Steve Bannon, former Breitbart chairman and current White House Chief Strategist, even went so far as to refer to the mainstream media as “the opposition party.”
Trump himself seems to employ a similar rationale.
The thinking is simple: if the coverage is unfavorable, it’s fake. If it touts the excellent job Trump and his cronies are doing in Washington, then it comes from what the White House deems a legitimate news source. These sources range from outright fake to sensational to thinly veiled propaganda arms of a single political party, or even an entire country.
Legitimate media doesn’t exist to appeal to a subset of its readers. It may lean one direction based on leadership and personnel, but even the notoriously liberal New York Times held Barack Obama’s feet to the fire when warranted.
Altmedia, altnarratives, and the spread of disinformation
And according to a new study, it could be much worse than that. These ‘alternative media’ sites could be employing “intentional use of disinformation tactics” to weaken the public and make all of us easier to control.
The study, published by Kate Starbird, Associate Professor of Human Centered Design and Engineering at the University of Washington, details three years of data collected by her and her team. The research sought to take a deeper look at how people spread rumors online during crisis events, and the bulk of the study centered around Twitter data.
Over time, Starbird and her team noticed the same type of rumor showing up again and again: that of alternative news sites pitching a substitute narrative for real world events. After the Boston Marathon bombing, for example, over 4,000 tweets claimed the tragedy was, in fact, a false flag operation perpetrated by the US military.
The source? InfoWars.
The research also discovered that the bulk of the buzz around these alternative narratives — altnarratives, if you will — came from networked groups of Twitter bots, known as botnets. These networks exist to complete a simple task: in this case, making stories appear more popular than they really are, in hopes that humans take notice and start sharing the links on their own.
This wasn’t a one-time thing.
Again and again, Starbird’s team found evidence of botnet operations promoting a number of different domains. All of the domains, however, had similarities. Though not all political in nature, many featured co-occurring hashtags — like #falseflag, #obama, #nra, #teaparty, etc. — as well as “pseudo-science theories about vaccines, GMOs, and chemtrails.”
The analysis revealed 24 alternative media domains primarily focused on “distributing conspiracy theories,” with another 44 attempting to communicate a political agenda.
Analyzing tweets containing links to these domains, researchers noticed that altnarrative tweets behaved differently from other types of rumors. Typical rumors rise quickly and decay just as fast. Altnarrative rumors, the team found, rise more slowly and tend to linger, often for weeks, months, or even years.
There’s also another interesting commonality: the stories were typically copied from one site to another. Some were original and offered a different perspective or additional evidence, but many were simply plucked from one site and dropped on another in their entirety. The danger here is a public that sees information backed up by multiple sources, without realizing it’s actually the same information.
It gets worse
This brings me to the most terrifying part of the study, which Starbird touches on:
From another perspective, these properties of the alternative news ecosystem — the proliferation of many and even conflicting conspiracy theories and the deceptive appearance of source diversity — may reflect the intentional use of disinformation tactics.
She’s quick to point out that there’s no clear evidence showing whether these are purposeful disinformation campaigns or just emergent effects of our current information space, but the latter is no less scary than the former.
And if you’re wondering why it matters, Starbird had this to say:
Their strategic argument is that a society who learns it cannot trust information can be easily controlled.
Simply put, disinformation, if intentional, seeks not to convince readers of the truth contained within the information, but to create “muddled thinking” within society. Once we’re confused, we can no longer identify what’s real and what’s fake. And that’s when we start choosing sides based on who we believe is acting in our best interest.