
This article was published on April 24, 2019

WhatsApp is failing to stop the spread of child abuse videos

Facebook-owned WhatsApp continues to be a hotbed for sharing sexual abuse videos in India, according to fresh findings from Cyber Peace Foundation (CPF), a New Delhi-based cyber security and policy think tank that fights against online crimes and warfare.

Despite Facebook’s attempts to clamp down on inappropriate content, the two-week long investigation conducted in March found dozens of WhatsApp chat groups with hundreds of members that share child sexual abuse material.

The groups were identified through a third-party WhatsApp public group discovery app that Google recently banned from the Play Store, but which can still be sideloaded using installation files available elsewhere online.

Nitish Chandan, a cybersecurity specialist who is also the project manager of CPF, found that members are solicited through invite links and then asked to join more private groups using virtual numbers to evade detection.

This is not the first time WhatsApp has come under scrutiny for the circulation of child sexual abuse material. Late last December, a TechCrunch investigation detailed “how third-party apps for discovering WhatsApp groups include ‘Adult’ sections that offer invite links to join rings of users trading images of child exploitation.”

What’s more, in a preliminary investigation conducted earlier this year, CPF found at least 50 WhatsApp groups, with hundreds of Indian subscribers, being used to share child sexual abuse material.

Although WhatsApp has repeatedly reiterated its commitment to user safety and zero tolerance for child sexual abuse, the messaging app remains an actively exploited platform for spreading malicious information, hate speech, fake news, and sexually explicit content.

Complicating the matter further is WhatsApp’s end-to-end encryption of all communications, which makes it harder for law enforcement agencies to monitor such illegal activities.

But there may be some solutions. WhatsApp already uses Microsoft’s PhotoDNA technology to proactively scan user profile photos for matches against known abuse imagery, banning both the uploader and all group members when it finds one.
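In broad strokes, that kind of hash matching works like the simplified sketch below: the service keeps hashes of known abuse imagery and checks unencrypted surfaces, such as profile photos, against them. PhotoDNA itself uses a proprietary, robust perceptual hash rather than the plain cryptographic hash shown here, and the function names and ban policy in the sketch are illustrative only, following the description above.

```python
import hashlib

# Hypothetical set of hashes of known abuse imagery, as supplied by a
# clearinghouse. PhotoDNA computes a robust perceptual hash; a plain
# SHA-256 digest is used here purely as a stand-in.
KNOWN_ABUSE_HASHES: set[str] = {
    # "…"  # placeholder entries, populated from a hash list
}

def profile_photo_matches(photo_bytes: bytes) -> bool:
    """Return True if the (unencrypted) profile photo matches a known hash."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in KNOWN_ABUSE_HASHES

def accounts_to_ban(uploader_id: str, member_ids: list[str], photo_bytes: bytes) -> list[str]:
    """If the group photo matches, return the uploader plus all current
    members, mirroring the policy described in the article."""
    if profile_photo_matches(photo_bytes):
        return [uploader_id, *member_ids]
    return []
```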

Yet there is evidence that more could be done. In a report published in Columbia Journalism Review last August, Himanshu Gupta and Harsh Taneja proposed a metadata-based approach to identify accounts that could be spreading fake news on the platform, without giving law enforcement agencies the ability to eavesdrop on all conversations.

By using a mix of metadata, the cryptographic hash of the multimedia content (which WhatsApp already uses for instant forwarding), and the phone numbers shared with Facebook, they suggested that WhatsApp can track “fake news” even if it can’t actually read the contents of the message.
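As a rough illustration of that metadata-only approach, the sketch below flags media hashes that are forwarded unusually widely and the accounts that seed them into many groups, without ever reading message contents. The data structure, thresholds, and function names are hypothetical and do not describe WhatsApp’s actual systems.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ForwardEvent:
    """Metadata only: no message text or media is ever inspected."""
    sender_number: str  # phone number, already shared with Facebook
    group_id: str       # group the media was forwarded into
    media_hash: str     # hash WhatsApp keeps for instant forwarding

# Hypothetical thresholds; real values would be tuned by the platform.
VIRAL_FORWARD_THRESHOLD = 1_000
SENDER_SPREAD_THRESHOLD = 50

def flag_suspect_media_and_senders(events: list[ForwardEvent]) -> tuple[set[str], set[str]]:
    """Return (viral media hashes, accounts spreading them widely),
    computed from forwarding metadata alone."""
    forwards_per_hash = Counter(e.media_hash for e in events)
    groups_seeded = Counter((e.sender_number, e.media_hash) for e in events)

    viral_hashes = {h for h, n in forwards_per_hash.items()
                    if n >= VIRAL_FORWARD_THRESHOLD}
    suspect_senders = {sender for (sender, h), n in groups_seeded.items()
                       if h in viral_hashes and n >= SENDER_SPREAD_THRESHOLD}
    return viral_hashes, suspect_senders
```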

While child sexual abuse material differs from fake news and poses its own complex engineering problem, it should be no less a priority for a company that runs a messaging app used by over 1.5 billion people in India and around the world.

