
This article was published on June 28, 2019

The world isn’t ready for deepfakes. Here’s what we need to do.

Worse than so-called "fake news," deepfakes are likely to further erode trust in online media sources.



How well do you think you could distinguish a genuine video of a politician or celebrity from one generated to mimic their likeness, down to their body-language quirks and accent?

Thanks to illusory superiority, you probably think you’re better than average. You probably also think you couldn’t possibly be fooled by a computer program. After all, you just got back from seeing the latest superhero movie, and it was obvious which parts were CGI’d.

But here’s the thing: deepfakes are getting so convincing that even the most discerning viewers, aided by the right technology, have trouble telling what’s faked from what’s real. This isn’t a parlor trick. In the right hands, deepfakes have the potential to destabilize entire societies, and we’re nowhere near ready to deal with the threat.

How deepfakes work



A “deepfake” is a fabricated video, either created from scratch or based on existing footage, typically designed to replicate the look and sound of a real human being saying and doing things they haven’t done or wouldn’t ordinarily do. Like many emerging technologies, it has roots in pornography, with online users attempting to create realistic videos of celebrities in sexual acts. That’s troubling enough on its own, but the same technique could be used to replicate a sitting U.S. president or another political leader, operating them almost like a ventriloquist’s dummy to say and do whatever the creator wants.

Why are these videos so much more convincing than entry-level Photoshop efforts? It comes down to the generative adversarial networks (GANs) used in the process. A GAN pits two neural networks against each other in distinct roles: a generator (the forger) tries to produce the most convincing video possible, while a discriminator (the detective) tries to determine whether the material it’s examining is fake. Each network learns from the other’s successes and failures, and over many training iterations the forger’s output becomes lifelike enough to fool the detective, and eventually human viewers.
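To make the forger-and-detective dynamic concrete, here is a minimal sketch of a GAN training loop in Python with PyTorch. It works on flattened single images rather than video, and the tiny networks, layer sizes, and hyperparameters are illustrative assumptions, not how a production deepfake system is built:

```python
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random noise fed to the forger
IMG_DIM = 28 * 28  # flattened image size (illustrative)

# The "forger": maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The "detective": outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the detective to separate real images from forgeries.
    noise = torch.randn(batch, LATENT_DIM)
    fakes = generator(noise).detach()  # don't update the forger on this pass
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the forger to make the detective call its fakes "real."
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Stand-in for a real dataset: one batch of random "images."
training_step(torch.randn(32, IMG_DIM))
```

In a real deepfake pipeline the two networks are far larger, video-aware, and trained on hours of footage of the target person, but the adversarial loop is the same: every round of training leaves the forger a little harder to catch.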

If you’re not convinced, take a look at this fake video of Barack Obama created by Jordan Peele to demonstrate the sheer power of this technology. It was made over a year ago now, and the technology has only grown more powerful since.

Why we’re not ready


It might seem alarmist to state that we aren’t ready to deal with the consequences of this technology, but there are several points to keep in mind.

First, we’ve already witnessed the power of fake news firsthand. Determining the real impact of fake news articles on the 2016 presidential election is a complicated matter; some studies are quick to point out that only 10 percent of the population was responsible for 60 percent of the fake news clicks, but this doesn’t account for the mere-exposure effect on the many people who only saw the headlines, nor does it accurately estimate the influence that 10 percent of voters could have on the overall election. After all, President Trump is estimated to have won by a mere 80,000 votes across three states; if fake news had even a tiny impact on the outcome, it could have been enough to change the fate of an entire country, and a powerful one at that.

Deepfakes take fake news articles to a new order of magnitude of persuasive power. It’s one thing to suspect that an article was written with an ulterior motive, or to question facts as they appear in a single piece on the web. It’s another to witness, firsthand, a well-known politician talking about their malicious intentions. Detecting deepfakes is already incredibly hard; remember, the creation process itself incorporates a “detective” algorithm that the forgery must fool before it’s ever released. Convincing people that a video they’ve seen was faked can be even harder. Millions of people who get their news from the web don’t even know that deepfakes exist.

Add to that the fact that deepfakes keep getting cheaper, easier to make, and harder to detect. The capabilities of the technology are accelerating at an unprecedented pace, and it’s getting to the point where ordinary users can create their own fake videos.

Let’s set all those concerns aside for a moment and assume we had a perfect means of detecting fake videos. How could we possibly control the fallout from a demonstrably fake video still being shared across social media? Even when you know it’s fake, watching a video can affect how you perceive someone, and social media platforms aren’t doing much to control this type of content. This was made painfully clear in a recent incident in which an obviously faked video of Nancy Pelosi, doctored to make her sound as if she were drunkenly slurring her speech, was left up on Facebook despite significant public outcry. Facebook’s Head of Global Policy Management Monika Bickert responded by stating: “We think it’s important for people to make their own informed choice about what to believe.” Fake and misleading information doesn’t violate any rules on Facebook, or on any other major social media platform, for that matter.

What could we do?

It’s obvious to anyone studying the problem that deepfakes have enormous potential to disrupt and destabilize the world, and that we aren’t currently equipped to deal with the problem. But complaining isn’t productive. Instead, we need to turn our attention to the solutions. So what could we possibly do to prepare for (or even eliminate) this threat?

We need action plans in three main areas to get ready for the coming waves of deepfake propaganda. First, we need to educate the public that deepfakes exist, and that even the most realistic videos deserve a degree of skepticism. Second, we need to develop technology capable of detecting algorithmically generated video, an obstacle that seems exceedingly difficult given how GANs work, but one that is possible; a simplified sketch of one detection approach follows below. Third, we need to demand more from social media platforms, where deepfake videos are most likely to have an impact. We can’t accept “it doesn’t violate our terms of service” as a suitable dismissal of this threat. We need better features and controls to counter this type of content, and we need them in place as soon as possible.
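On the detection front, one common research approach (not tied to any particular product) is to train an image classifier on frames extracted from known-real and known-fake videos, then score new clips frame by frame. Below is a minimal sketch assuming PyTorch and torchvision; the untrained ResNet-18 backbone and the 224x224 input size are illustrative choices, and a real detector would add face cropping, temporal modeling, and fine-tuning on a labeled corpus:

```python
import torch
import torch.nn as nn
from torchvision import models

# Frame-level deepfake detector: a standard image backbone with a single
# output logit for P(frame is fake). The weights here start untrained; in
# practice you would fine-tune on labeled real and fake frames first.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

def fake_probability(frames: torch.Tensor) -> torch.Tensor:
    """Score a batch of video frames, shaped (batch, 3, 224, 224)."""
    backbone.eval()
    with torch.no_grad():
        return torch.sigmoid(backbone(frames)).squeeze(1)

# A whole clip can be scored by averaging its per-frame probabilities.
frames = torch.randn(8, 3, 224, 224)  # stand-in for preprocessed real frames
print(fake_probability(frames).mean().item())
```

The catch is the arms race noted above: any detector reliable enough to deploy can also be folded back into a GAN’s training loop as a stronger “detective,” teaching the next generation of forgeries how to evade it.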
