
Fake news is a scourge on the global community. Despite our best efforts to combat it, the problem lies deeper than just fact-checking or squelching publications that specialize in misinformation. The current thinking still tends to support an AI-powered solution, but what does that really mean?
According to recent research, including a paper from scientists at the University of Tennessee and Rensselaer Polytechnic Institute, we're going to need more than just clever algorithms to fix our broken discourse.
The problem is simple: AI can't do anything a person can't do. Sure, it can do plenty of things faster and more efficiently than people, like counting to a million, but at its core, artificial intelligence only scales things people can already do. And people really suck at identifying fake news.
According to the aforementioned researchers, the problem lies in what's called "confirmation bias." Basically, when a person thinks they already know something, they're less likely to be swayed by a "fake news" tag or a "dubious source" description.
Per the team's paper:
In two sequential studies, using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, under this condition tailored advice is more effective than generic one.
This makes it incredibly difficult to design, develop, and train an AI system to spot fake news.
While most of us may think we can spot fake news when we see it, the truth is that the bad actors creating misinformation aren't doing so in a void: they're better at lying than we are at telling the truth. At least when they're saying something we already believe.
The scientists found people, including independent Amazon Mechanical Turk workers, were more likely to incorrectly view an article as fake if it contained information contrary to what they believed to be true.
On the flip-side, people were less likely to make the same mistake when the news being presented was considered part of a novel news situation. In other words: when we think we know what's going on, we're more likely to agree with fake news that lines up with our preconceived notions.
While the researchers do go on to identify several methods by which we can use this information to shore up our ability to inform people when they're presented with fake news, the gist of it is that accuracy isn't the issue. Even when the AI gets it right, we're still less likely to believe a real news article when the facts don't line up with our personal bias.
This isn't surprising. Why should someone trust a machine built by big tech in place of the word of a human journalist? If you're thinking: because machines don't lie, you're absolutely wrong.
When an AI system is built to identify fake news, it typically has to be trained on pre-existing data. To teach a machine to recognize and flag fake news in the wild, we have to feed it a mixture of real and fake articles so it can learn to tell which is which. And the datasets used to train AI are usually labeled by hand, by humans.
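A minimal sketch of that training loop, assuming scikit-learn and a tiny hand-labeled dataset (the articles, labels, and test headline below are invented for illustration, not real training data):

```python
# Sketch: train a text classifier on human-labeled real/fake articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled examples (hypothetical): 1 = fake, 0 = real.
articles = [
    "Miracle cure doctors don't want you to know about",
    "City council approves new budget after public hearing",
    "Secret memo proves the moon landing was staged",
    "Researchers publish peer-reviewed study on vaccine safety",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier: the model can only
# learn whatever patterns the human labelers encoded, biases included.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

prediction = model.predict(["Shocking secret the government is hiding"])
print(prediction)
```

The point of the sketch is the dependency, not the model: whatever bias went into `labels` comes straight back out of `model.predict`.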
Often this means crowd-sourcing labeling duties to a third-party cheap-labor outfit such as Amazon's Mechanical Turk, or to any number of data shops that specialize in datasets, not news. The humans deciding whether a given article is fake may or may not have any actual experience or expertise in journalism, or with the tricks bad actors can use to create compelling, hard-to-detect fake news.
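Crowd-sourced labels are typically aggregated by something like majority vote. A toy sketch with three hypothetical annotators (all names and votes invented) shows why that doesn't wash out bias: if most annotators share the same blind spot, the "ground truth" label inherits it.

```python
# Sketch: aggregate crowd-sourced labels by simple majority vote.
from collections import Counter

# Three annotators label each article "fake" or "real" (invented data).
annotations = {
    "article_1": ["fake", "fake", "real"],
    "article_2": ["real", "real", "real"],
    "article_3": ["fake", "real", "fake"],
}

def majority_label(votes):
    # Most common vote wins; a bias shared by two of three annotators
    # becomes the dataset's "truth".
    return Counter(votes).most_common(1)[0][0]

gold_labels = {name: majority_label(votes) for name, votes in annotations.items()}
print(gold_labels)
# {'article_1': 'fake', 'article_2': 'real', 'article_3': 'fake'}
```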
And, as long as humans are biased, we'll continue to see fake news thrive. Not only does confirmation bias make it difficult for us to differentiate facts we don't agree with from lies we do, but the perpetuation and acceptance of outright lies and misinformation from celebrities, our family members, peers, bosses, and the highest political offices makes it difficult to convince people otherwise.
While AI systems can certainly help identify egregiously false claims, especially those made by news outlets that regularly engage in fake news, the fact remains that whether a news article is true isn't really the issue for most people.
Take, for instance, the most watched cable network on television: Fox News. Fox's own lawyers have repeatedly argued that numerous programs, including the second highest-viewed program on its network, hosted by Tucker Carlson, are actually fake news.
In a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil, a Trump appointee, ruled in favor of Carlson and Fox after determining that reasonable people wouldn't take the host's everyday rhetoric as truthful:
The "general tenor" of the show should then inform a viewer that [Carlson] is not "stating actual facts" about the topics he discusses and is instead engaging in "exaggeration" and "non-literal commentary." … Fox persuasively argues, that given Mr. Carlson's reputation, any reasonable viewer "arrive[s] with an appropriate amount of skepticism."
And that's why, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news statement is true or false.
If the news outlets themselves, the general public, elected officials, big tech, and the so-called experts can't decide whether a given news article is true or false without bias, there's no way we can trust an AI system to do so. As long as the truth remains as subjective as a given reader's politics, we'll be inundated with fake news.