
This article was published on July 31, 2020

New AI tool detects child sexual abuse material with ‘99% precision’

Safer is already being used by Flickr and Slack



Child sexual abuse material on the web has grown exponentially in recent years. In 2019, there were 69.1 million files reported to the National Center for Missing and Exploited Children in the US — triple the levels of 2017 and a 15,000% increase over the previous 15 years.

A new AI-powered tool called Safer aims to stem the flow of abusive content, find the victims, and identify the perpetrators.

The system uses machine learning to detect new and unreported child sexual abuse material (CSAM). Thorn, the non-profit behind Safer, says it spots the content with greater than 99% precision. 

Thorn built the tool for businesses that don’t have their own content filtering systems.


“What we realized was that the [smaller] organizations just didn’t have the resources to identify, remove, and report this type of material,” Caitlin Borgman, Safer’s VP of business development, told TNW.

[Read: This AI needs your help to identify child abusers by their hands]

“There’s inconsistency across the industry as to what to do when you come across it and how to handle it. There’s no record of best practices. It’s costly to build — it can cost over half a million dollars, even for a midsize organization just to stand it up. And then you have to maintain it. There are a lot of gaps in what you can detect. And there’s no real centralized database of CSAM. It’s very siloed and distributed.”

Safer is an attempt to resolve these issues and help tech firms rid their platforms of abusive material.

How Safer works

The system detects abusive content on a platform by generating digital fingerprints of files, known as hashes.

It compares these hashes against a dataset of millions of known abusive images and videos. If a file hasn’t been previously reported, machine learning models trained on abusive material determine whether it’s likely to be CSAM. Flagged content is then queued for review by the platform’s moderation team and reported to the National Center for Missing and Exploited Children, which adds it to its database.
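For illustration, here’s a minimal sketch of that kind of pipeline: fingerprint the incoming file, check it against a database of known material, and fall back to a classifier for unreported content. Every name here (the hash choice, the function names, the threshold) is a hypothetical assumption, not Safer’s actual implementation, which isn’t public.

```python
# Sketch of a hash-match-then-classify pipeline, loosely modelled on the workflow
# described above. Real systems typically use perceptual hashes (which survive
# resizing and re-encoding) rather than the exact SHA-256 match shown here.

import hashlib
from pathlib import Path

# Placeholder standing in for a database of hashes of known, verified material.
KNOWN_CSAM_HASHES: set = set()


def file_hash(path: Path) -> str:
    """Compute a SHA-256 fingerprint of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def classify_image(path: Path) -> float:
    """Stand-in for a trained classifier; returns a probability the file is CSAM."""
    return 0.0  # a real deployment would call an ML model here


def review_upload(path: Path, threshold: float = 0.99) -> str:
    """Route an uploaded file: known match, likely new CSAM, or clean."""
    if file_hash(path) in KNOWN_CSAM_HASHES:
        return "match-known"      # queue for moderators and report to NCMEC
    if classify_image(path) >= threshold:
        return "flag-unreported"  # likely new CSAM: human review, then report
    return "clean"


if __name__ == "__main__":
    print(review_upload(Path(__file__)))  # prints "clean" for this script itself
```

The two-stage design is the key idea the article describes: exact or perceptual hash matching catches material that has already been verified, while the classifier extends coverage to content no one has reported yet, with humans reviewing anything it flags.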

Among Safer’s early clients is photo-sharing service Flickr. The company recently used the tool to detect a single image of abuse on its platform. A law enforcement investigation that followed led to the identification and recovery of 21 children, ranging from 18 months to 14 years old. The perpetrator is now in federal prison.

In total, Thorn says the tool helped remove nearly 100,000 CSAM files during its beta phase. Safer is now available to any company operating in the US, with current customers including Imgur, Slack, and Vimeo. Next year, Thorn plans to offer the product to overseas firms, after integrating it with their reporting requirements.

The non-profit’s ultimate goal is to eliminate child sexual abuse material from the open web.

“We see a future where every child can just be a child, the internet is a safe environment, and victims who may have been victimized 10 years ago … will not be re-victimized when their images are reshared or resurfaced,” said Borgman. “It’s really just disrupting the supply chain. It’s stopping it in its tracks.”
