Better known as Brusselsgeek, Jennifer is a reporter on EU tech policy and digital rights. Regularly bursting the EU bubble, she was awarded #1 Tech Influencer 2019 by ZN, was named by Onalytica as one of the world's Top 100 Influencers on Data Security 2016, and was listed by Politico as one of the Top 20 Women Shaping Brussels in 2017. She likes good books and bad films, especially those featuring aliens, swords and time travel – preferably all three.
The European Commission is going to tackle so-called fake news. Which is great. Except it doesn’t know yet what it means by “fake news,” and you can’t really regulate something you don’t understand.
This is its first and biggest problem, but there’s more. The Commission has set up a High-Level Expert Group, which will present its report in the coming weeks, and held a colloquium to discuss the matter. But the embryonic ideas emerging range from bad to… terrible.
But first, some good points: At the colloquium on February 27, Silvia Grundmann, Head of the Media and Internet Division at the Council of Europe, gamely attempted to destroy the sexiness of the term “fake news,” describing it instead as “information pollution.”
She further broke it down into misinformation (false, but with no intent to harm); disinformation (false, imposter or manipulated content designed to harm); and mal-information (not necessarily false, but leaks, harassment, hate speech, revenge porn, etc.). All this usefully demonstrates how diverse the phenomenon popularly termed “fake news” actually is.
On that last point, it’s worth noting that “fake news” is very often not fake at all. Instead it relies on people’s natural inclination to create narrative: if A and B happened, then we can infer C. Fake news often sets up true A, and true B, and invites readers to make the leap to false assumption C. And many of us do. Because that is how human beings work.
Further complicating the issue is the growing number of different actors who use fake news — governments, political parties, Macedonian teenagers — and their different motivations, ranging from personal notoriety, “just for lols,” financial gain and political power, to the destabilization of democracy itself.
Some of these are already illegal — after all we do have defamation and slander laws, laws against incitement to violence, against harassment, and anti-hate-speech laws. But how do we define what social media posts are actually illegal, when the definition for illegality varies between countries?
So I sympathize with the European Commission (shock news!); it doesn’t have an easy task. What I cannot get behind are the truly terrible ideas that have been put forward as “solutions.”
Good motivations, terrible ideas
France has proposed banning so-called fake news during the country’s elections, while in Germany, the Network Enforcement Act mandates fines of up to €50 million on social media companies that don’t delete harmful content within 24 hours.
This is a leap from sanctioning content creation to penalizing the “circulation” of illegal content. Worse, it puts the decisions about what is and isn’t harmful into the hands of companies that make money from clicks.
Asking social media companies to reduce fake news clickbait voluntarily is completely unworkable given their business model. You don’t appoint a butcher to defend the interests of vegans!
Forcing them to do it via sanctions is just as bad, and will increase preemptive censorship by tech companies that fear regulatory reprisal as “false positives” soar.
“Fabricated information has always existed. What is new is this contemporary version’s tendency to spread globally at an extraordinary pace,” says Alberto Alemanno, Jean Monnet Professor of EU Law at HEC Paris and author of Lobbying for Change. Given this, many platforms will turn to automated filtering systems to try to keep up.
The problem with asking the machines to do it is that they will get it wrong. Automated filtering for things like “community standards” and copyright is already widespread, and we all know how accurate it is… i.e. not very. We should not hand over the keys to something as important as our democratic principles to systems that cannot distinguish Madonna from a 4-year-old, or porn from war photography.
Fake news is a cultural problem; it needs a cultural solution. Over-reliance on technical solutions is a short-term fix that could actually make matters a lot worse.
Another suggestion that comes up again and again is to swamp fake news with the truth.
“Instead of killing the story, you surround that story with related articles so as to provide more context and alternative views to the reader. In other words, the social platform hosting the disputed news alters the environment in which that story is presented and consumed. That’s exactly what Facebook is doing with its newly released feature offering ‘Related Articles’ directly beneath the disputed story,” explains Alemanno.
While the principle behind this idea seems reasonable, determining which stories are fake news and in need of “alternative views” is problematic.
And therein lies the problem with automated approaches. By presenting “alternative views” to all news stories, platforms could unwittingly present fake news as an “alternative” to genuine reporting and so actually INCREASE the amount of fake news.
Constantly referring to “alternatives” to news also promotes the mindset that everything is relative and there are no absolute facts… unless we want public authorities to police the media as Orwellian “Ministries of Truth.” (That was a joke!)
More humans in the mix to counter fake news is a good idea. But this must be done carefully. Sometimes fake news stories spread more rapidly once they are denounced as fake as people click to see what all the fuss is about.
One anti-fake-news tool that seems to be getting the balance right is the Full Fact organization, funded by eBay founder Pierre Omidyar and investor George Soros. “In the algorithmic war for truth, this seems like a much better approach than that adopted by YouTube — use technology to flag up bad things more quickly than people can, but then use those results to help actual people make sensible decisions,” says tech journalist and author of Control Shift, David Meyer.
Many actors have focused on the need to support ethical and responsible reporting. How to do that is the tricky part. The behavioral advertising model that most news organizations have become hooked on only amplifies filter bubbles and echo chambers as it tracks and categorizes users. By creating this ecosystem, this business model has laid the groundwork for the fake news and clickbait phenomena. To tackle fake news, ethical reporting must move away from this model toward more sustainable funding.
At the Commission’s colloquium, some speakers likened fake news to misleading advertising. But there is a fundamental difference between advertising goods and services and the fake news phenomenon that pushes a political position. We have consumer laws to protect us from faulty goods; in practice, we have very little to protect us from the erosion of democracy!
As the Center for Democracy and Technology pointed out in its response to a public consultation on the subject, “news organizations should report the facts as objectively and accurately as possible. Social media companies should provide data that allows researchers to assess the problem and the effectiveness of the measures taken to counter [fake news].” Well OF COURSE they should! It’s terrifying that this even needs saying.