Cara Curtis, former TNW writer
TNW Answers is a live Q&A platform where we invite interesting people in tech who are much smarter than us to answer questions from TNW readers and editors for an hour.
Social media, a tool created to guard freedom of speech and democracy, has increasingly been used in more sinister ways. From amplifying political disinformation during elections to inciting online violence and eroding trust in the media, Facebook isn’t just a space to share “what’s on your mind,” and you’d be naive to believe so.
As technology advances, it’s becoming increasingly hard to detect fake news and manipulated content online. To shed some light on the issue, yesterday Samuel Woolley, Program Director of propaganda research at the Center for Media Engagement, hosted a TNW Answers session.
[Read: Study: 98% of kids in the UK can’t tell fake news from the truth]
Woolley is the co-founder and former research director of the Computational Propaganda Project at the Oxford Internet Institute. In his session, Woolley gave insight into topics ranging from how he personally copes with researching the effects of fake news every day to how regulators draw the line between propaganda and genuine political opinion.
Here are the key takeaways from the session:
Woolley started his research into online propaganda after picking up an interest in politics during high school.
“To be honest, most of my work is motivated by the goal of what anthropologist Laura Nader called ‘studying up’ in her famous 1969 article ‘Up the Anthropologist,’” Woolley said. “The basic idea is that we need researchers and thinkers who study people in positions of power. I wanted to study people who thought differently than I did, governments who were manipulating citizens, that kind of thing. I always was fairly obstinate and argumentative, so it worked well for me.”
In his book ‘The Reality Game: How the Next Wave of Technology Will Break the Truth,’ Woolley discusses the key indicators that the next wave of disinformation is moving from social media to new frontiers including virtual and augmented reality platforms, AI-driven virtual assistant systems, and other forms of technology designed in the human image.
When asked what kind of propaganda we should expect in the AR/VR space, Woolley explained: “I think that the disinformation and propaganda we will see in AR/VR will rely, no surprise perhaps, on the multi-sensory nature of these technologies.
“So the content will look quite different. Whereas now, most disinformation is written, with more and more appearing as images and video, the next wave will be interactive and immersive. For instance, I give an example in the new book of how the Chinese Communist Party has experimented with using VR to test low-level party officials on their knowledge. Basically, the Party gets these people in a VR room and quizzes them on their devotion to the CCP. This seems, to me, a lot more potent than say, a Skype call, because it is in VR.”
We’re living in a time when spreading misinformation is as easy as a Facebook Share or a Retweet. Woolley explained that propagandists are pragmatists who use accessible and cheap tools to spread misinformation online. “As machine learning bots become cheaper and easier to make, they will be leveraged for propaganda — we are already seeing this to some extent, but there is still time to act.”
Woolley’s research has made him “cautiously optimistic about the future.” He explained that while it’s true that we face technologically mediated disinformation and propaganda in amounts — and in ways — we’ve never seen before, it’s also true that much of it is rudimentary.
“What we’ve got to look out for, and what I explore deeply in The Reality Game, is the burgeoning use of AI, VR, and deepfake video for the purposes of spreading disinformation. We are ahead of this problem now, as I see it. But we’ve got to act.”
You can read Woolley’s entire TNW Answers session here.