This article was published on September 4, 2018

Google unleashes its new image detection AI on child abuse content online

Google’s latest attempt to battle the spread of child sexual abuse material (CSAM) online comes in the form of an AI that can quickly identify images that haven’t been previously catalogued.

It’s part of the company’s Content Safety API, which is available to NGOs and other bodies working on this issue. By automating the first pass through images, the AI not only speeds things up, but also reduces the number of human reviewers who have to be exposed to the material, a job that can take a serious psychological toll.

Given that the UK’s Internet Watch Foundation (IWF) found nearly 80,000 sites hosting CSAM last year, it’s clear that the problem isn’t close to being contained. Google’s been on this mission for several years now, and it’s certainly not the only tech firm that’s taking steps to curb the spread of CSAM on the web.

Google began removing search results related to such content back in 2013, and subsequently partnered with Facebook, Twitter and the IWF to share lists of file hashes that help identify and remove known CSAM files from their platforms.
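As a rough sketch of the idea behind hash-list matching (not the actual pipeline any of these companies run, and real deployments use perceptual hashes such as PhotoDNA that tolerate resizing and re-encoding rather than a plain cryptographic digest), a platform can compute a fingerprint for each uploaded file and check it against the shared list:

```python
# Minimal, hypothetical sketch of hash-list matching. This is NOT the system
# used by Google, Facebook, Twitter or the IWF; production systems rely on
# perceptual hashes, while a cryptographic hash only catches byte-identical copies.
import hashlib
from pathlib import Path

# Hypothetical shared blocklist of hashes of previously identified files.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_sha256(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_match(path: Path) -> bool:
    """Check an uploaded file against the shared hash list."""
    return file_sha256(path) in KNOWN_HASHES
```

The limitation is obvious: anything not already on the list slips through, which is exactly the gap the new classifier is meant to close.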

Microsoft worked on a similar project back in 2015, and Hollywood star Ashton Kutcher founded Thorn, an NGO focused on building tech tools to fight human trafficking and child sexual exploitation. One of its projects, dubbed Spotlight, helps law enforcement officials by identifying ads on classifieds sites and forums that promote escort services involving minors.

Google’s new AI goes beyond matching known hashes, so it’ll hopefully be able to tackle new content without relying on databases of previously identified CSAM.
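Google hasn’t published the internals of the Content Safety API, so the following is purely a hypothetical sketch of the triage idea it describes: score images that don’t match any known hash with a classifier, and surface the highest-scoring ones to human reviewers first. Every name here (score_image, ReviewQueue) is illustrative, not Google’s API.

```python
# Hypothetical triage sketch, not Google's Content Safety API. Images with no
# hash match are scored by a classifier, and the review queue is ordered so
# the most likely matches reach human reviewers first.
import heapq
from pathlib import Path

def score_image(path: Path) -> float:
    """Stand-in for a trained image classifier returning a risk score in [0, 1].
    A real system would run a model here instead of returning a constant."""
    return 0.0  # placeholder value

class ReviewQueue:
    """Orders unmatched images so the highest-scoring ones are reviewed first."""
    def __init__(self) -> None:
        self._heap: list[tuple[float, str]] = []

    def add(self, path: Path) -> None:
        # heapq is a min-heap, so store the negated score to pop the riskiest first.
        heapq.heappush(self._heap, (-score_image(path), str(path)))

    def next_for_review(self) -> Path:
        _, path = heapq.heappop(self._heap)
        return Path(path)
```

Ordering the queue this way doesn’t take humans out of the loop; it just means reviewers spend their limited time on the images most likely to matter.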

Find out more about the new service, which is available for free to NGOs, on this page.
