Facebook recently tested a new algorithm that elevated comments containing the word ‘fake’ in an effort to weed out fake news. Users were not pleased.
The test, which has already finished, was the social network’s latest attempt to fight fake news, and it left users more annoyed than impressed. But it’s worth noting that Facebook is sticking to its word and continues to search for ways to eliminate fake news from the platform once and for all.
Facebook has long been accused of promoting misinformation – especially since the 2016 election. After all, fake stories reached a larger audience than actual news during the final months of the US election campaign. But for all the hate it gets, the company has taken various steps to fight the epidemic.
As early as December 2016, Facebook introduced tools that let users report any post they considered a hoax, as well as the option to flag stories as disputed, working with third-party fact-checking organizations such as Poynter. The site also started showing related articles below stories to give users a wider perspective and let them judge for themselves whether a story was a hoax.
The social network announced in April this year that it would punish Facebook Pages that spread fake news by taking away their economic incentives. Facebook said it would make it as difficult as possible for those posting fake news to buy ads on the network, and that it would apply machine learning to help detect fraud and “inauthentic spam accounts.” This was accompanied by a tool that appeared at the top of users’ news feeds to help them spot fake news day to day.
As recently as August, Facebook went further by directly blocking ads from Pages that repeatedly spread fake news — one of its most assertive moves so far. And in September, amid revelations of Russian interference in the US elections, the social network announced it was deleting more accounts it had identified as connected to that interference, while continuing to use artificial intelligence and sending more stories to fact-checkers.
Despite these steps forward, the fight against fake news is seemingly never-ending. Facebook and others have tackled the issue, but the problem ultimately lies with those who share the stories in the first place.
Sometimes Facebook will make mistakes along the way, as with the aforementioned test that highlighted comments containing the word ‘fake.’ But even when it gets things wrong, it matters that the company is at least trying.
The steps currently being taken are necessary: they not only make users more aware, but also give them the chance to participate in solving the problem.
Tomorrow marks a year since the 2016 elections. Fake stories have died down slightly, but we’ll have to wait for the next political turmoil to see how effective these precautions really are. Until the problem is solved, you can read this guide on how to avoid getting tricked by fake news.