This article was published on August 5, 2020

Deepfakes are the most worrying AI crime, researchers warn


Image: EFF Photos

Deepfakes are the most concerning use of AI for crime and terrorism, according to a new report from University College London.

The research team first identified 20 different ways AI could be used by criminals over the next 15 years. They then asked 31 AI experts to rank them by risk, based on their potential for harm, the money they could make, their ease of use, and how hard they are to stop.
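For the curious, here is a minimal sketch of how such a multi-criteria expert ranking might be aggregated. The threat names, scores, and the simple averaging rule below are all invented for illustration under assumed scoring conventions; they are not the UCL team's actual data or methodology.

```python
# Hypothetical sketch: each expert rates each threat on the study's four
# criteria (potential harm, criminal profit, ease of use, and difficulty
# to stop), and threats are ranked by their overall average score.
# All names and numbers are illustrative, not the study's data.
from statistics import mean

# Ratings on a 1-5 scale, one dict per expert.
expert_scores = {
    "deepfakes": [
        {"harm": 5, "profit": 4, "ease": 4, "hard_to_stop": 5},
        {"harm": 4, "profit": 5, "ease": 4, "hard_to_stop": 4},
    ],
    "burglar bots": [
        {"harm": 2, "profit": 2, "ease": 3, "hard_to_stop": 1},
        {"harm": 1, "profit": 2, "ease": 2, "hard_to_stop": 2},
    ],
}

def overall_risk(ratings):
    """Average each criterion across experts, then average the criteria."""
    criteria = ratings[0].keys()
    return mean(mean(r[c] for r in ratings) for c in criteria)

# Rank threats from most to least concerning.
for threat in sorted(expert_scores, key=lambda t: overall_risk(expert_scores[t]), reverse=True):
    print(f"{threat}: {overall_risk(expert_scores[threat]):.2f}")
```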

Deepfakes — AI-generated videos of real people doing and saying fictional things — earned the top spot for two major reasons. Firstly, they're hard to identify and prevent. Automated detection methods remain unreliable, and deepfakes are also getting better at fooling human eyes. A recent Facebook competition to detect them with algorithms led researchers to admit it's "very much an unsolved problem."

Secondly, deepfakes can be used in a variety of crimes and misdeeds, from discrediting public figures to swindling cash out of the public by impersonating people. Just this week, a doctored video of an apparently drunken Nancy Pelosi went viral for the second time, while deepfake audio has helped criminals steal millions of dollars.



In addition, the researchers fear that deepfakes will make people distrust audio and video evidence — a societal harm in itself.

Study author Dr Matthew Caldwell said the more our lives move online, the greater the dangers will become:

Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.

The study also identified five other major AI crime threats: driverless vehicles as weapons, AI-powered spear phishing, harvesting of online data for blackmail, attacks on AI-controlled systems, and fake news.

But the researchers weren’t overly alarmed by “burglar bots” that enter homes through letterboxes and cat flaps, as they’re easy to catch. They also ranked AI-assisted stalking as a crime of low concern — despite it being extremely harmful to victims — because it can’t operate at scale.

They were far more worried about the dangers of deepfakes. The tech has been generating alarming headlines since the term first emerged on Reddit in 2017, but few of those fears have been realized thus far. However, the researchers clearly believe that's set to change as the tech develops and becomes more accessible.
