Fake news is as old as the written word, and probably older, but it’s technology that’s created the insidious version of fake news we know today. Now, the question is whether technology, and artificial intelligence in particular, can also be the foil that will prevent the eventual fakenewspocalypse.
Making Rome great again
We’ve had fake news for a long time; we just used to call it propaganda and lies. In the first century BC, Julius Caesar, the OG populist, was already using his Commentāriī dē Bellō Gallicō (Commentaries on the Gallic War) to communicate ‘news’ from the Gallic Wars, or at least his version of it, directly to the plebeians.
However, Caesar’s fake news was different from the fake news we have today. His motives were clear: self-aggrandizement and political gain. Further, he could only create and disseminate fake news effectively because he held immense power. Even then, his reach was limited: literacy rates were low, and his accounts likely had to be read aloud in the public square to spread.
Today, fake news is open source and everywhere. You don’t need to be Julius Caesar or Donald Trump to create and spread it, because technology has changed the paradigm of how and why we create and consume news. It started with the printing press – which John Adams blamed for a rise in fake news – and has culminated in the advent of social media, a glut of new online news sources, and new online tools – including bots that fabricate headlines and stories – that make it easier than ever to create and disseminate news, and by extension fake news.
At the same time, technology – the rise of clickbait and ad-driven revenue in particular – has also changed the incentives for creating fake news. Now people make fake news for political gain and, more often, for financial gain. Those Macedonian teenagers weren’t pumping out fake news because they cared about Donald Trump or Hillary Clinton; they did it because it earned them as much as $5,000 a month, and in some cases $3,000 a day, in ad revenue.
As a result of this perfect storm – easy access to distribution and financial incentive – fake news is now rapidly becoming more prolific and influential than the truth. A new study from MIT analyzed 126,000 major contested news stories tweeted by three million users across the span of Twitter’s existence. By every metric, falsehood consistently dominated the truth on Twitter.
It’s likely going to get worse before it gets better – by 2022, people in developed economies could be encountering more fake news than real information.
So, how do we slow the march of fake news in the post-truth era?
One proposed solution is using AI to hit fake news authors where it hurts most: their advertising revenue, and by extension their wallets.
Detecting fakes with AI
The link between fake news and ad revenue has been demonstrated by Harvard University’s Nieman Lab and researchers at Sofia University. Advertising platforms have created an incentive for fake news by allowing anyone to make money from ads even if their content is “fake”. This means advertisers and the advertising industry have a critical role to play in combating fake news.
Or Levi, founder and CEO of AdVerif.ai, says this has led major advertisers, such as Procter & Gamble, to begin decreasing their digital advertising budgets to protect their brand identity from fake news scandals. This may be why advertising networks across the U.S. and Europe have bankrolled the development of Levi’s startup, AdVerif.ai.
Bringing a background in deep learning, Levi and his team at AdVerif.ai started by building a tool that could generate news stories. Once that tool existed, Levi realized the same principles were likely applicable to a major problem he cared about: fake news. The team turned that experience into a new AI tool that government agencies, fact checkers, ad networks and advertisers are using to automatically determine which stories are fake and which are real.
“What we tried to do with artificial intelligence was to take this big challenge of detecting fakes, break it down into smaller tasks — source of the story, what officials are saying about the story, is it being reported by major news industries, and is it something that was debunked in the past — and automate that process,” says Levi.
Their AI-based algorithm uses these ‘smaller tasks’ to examine content for signs of false information, then provides clients with a report scoring the likelihood that a story is fake news. The approach seems to be working: AdVerif.ai launched last November, is now working with ad networks in the U.S. and Europe, and, the company claims, its algorithm has identified fraudulent stories with an accuracy approaching 90 percent.
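To make the idea concrete, here is a minimal sketch of how sub-task signals like Levi’s could be combined into a single fakeness score. The signal names, weights, and thresholds below are purely illustrative assumptions, not AdVerif.ai’s actual method or model.

```python
# Hypothetical sketch of the "smaller tasks" approach described above.
# None of these signals or weights come from AdVerif.ai; they simply
# illustrate combining sub-task scores into one report score.

from dataclasses import dataclass


@dataclass
class Signals:
    source_reputation: float      # 0.0 (unknown blog) .. 1.0 (established outlet)
    official_confirmation: float  # do official statements support the claim?
    major_outlet_coverage: float  # is it reported by major news organizations?
    previously_debunked: bool     # has a fact checker debunked it before?


def fake_news_score(s: Signals) -> float:
    """Return a 0..1 likelihood that a story is fake (higher = more suspect)."""
    if s.previously_debunked:
        return 0.95  # a prior debunking dominates the other signals
    # Weighted average of the credibility signals, inverted into a risk score
    credibility = (0.4 * s.source_reputation
                   + 0.3 * s.official_confirmation
                   + 0.3 * s.major_outlet_coverage)
    return round(1.0 - credibility, 2)


story = Signals(source_reputation=0.2,
                official_confirmation=0.1,
                major_outlet_coverage=0.0,
                previously_debunked=False)
print(fake_news_score(story))  # prints 0.89
```

A real system would of course learn these weights from labeled data rather than hard-code them, but the decomposition itself – score each sub-task, then aggregate – is the part the quote describes.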
In addition, the company made it to the Top 25 of Accenture’s year-round Innovation Awards Program in the ‘security’ category. The winners will be announced during the finals on November 2.
Levi’s team made an early decision to focus on serving companies rather than the average user. This was chiefly because while individuals may not care whether all the stories they read are true, advertisers, and by extension ad networks, have their credibility on the line.
Levi believes giving these companies the tools to identify fake news is an important step to fighting this unfavorable development and cutting off revenue for fake news creators.
Creating an AI arms race
AdVerif.ai isn’t the only company using AI to help advertisers take on fake news. Google and Facebook are refining their own in-house AI to try to stop fake news websites from earning ad revenue, and a host of cybersecurity companies are adding fake-news-fighting AI to their offerings.
So, is this our silver bullet solution?
Well, there are risks to making AI our content evaluator. If AI controls what gets seen online, there is a serious risk that it will amplify certain societal biases and have political ramifications, potentially favoring one viewpoint over another, because AI tends to mirror the biases of the humans who create it.
Worse, some also worry that using AI to detect fake news will spark an AI arms race, with fake news creators building AI that outsmarts the detection algorithms and, in the process, developing ever more powerful tools for the fake news arsenal.
Ultimately, even with new tools at our disposal, the best way to combat the spread of fake news may depend on people, or at least an alliance between people and machines. The societal consequences of fake news – from political polarization and increased partisanship to eroded trust in the media and government – are significant.
You would think this would be reason enough for companies and individuals to forswear fake news. Yet ad networks and advertisers still have incentives to publish fake news, and by extension not to work with companies like AdVerif.ai. This brings us to the biggest factor propelling fake news: people like reading it, and that makes it financially viable.
So, while powerful new tools will help curb fake news, we humans, who create and share it, will also need to do our part and become more conscientious about what we read and share.
Accenture Innovation Awards
The 12th edition of the Accenture Innovation Awards (AIA) for innovative concepts and solutions in the Dutch market will take place on November 2, 2018. Would you like to be at the Innovation Summit, the grand finale of the AIA innovation journey for the 40 finalists in eight themes? Check out the program and the application form on their website.
This post is sponsored by the Accenture Innovation Awards.