There are more than 1.5 billion people using WhatsApp, and sadly, some of them are your relatives spamming you with bad jokes and flowery ‘good morning’ messages. Then there are spammers of another kind – bots and people who try to clutter your message box with automated messages.
The company today released a whitepaper stating that it removes over two million such accounts every month, and that 75 percent of those removals are flagged by the app’s machine learning systems.
In a session with journalists in India, the company’s software engineer, Matt Jones, explained that abusers use many techniques – including custom devices with multiple SIMs and specially coded simulators that masquerade as users – to run multiple instances of WhatsApp.
Through these techniques, abusers might spread click-bait articles or misinformation, like last year’s viral video in India alleging that the people shown in it were child kidnappers, which led to multiple lynchings in the country. The company said it’s hit upon ways to prevent accounts from sending out bulk and automated messages.
What does WhatsApp do to fight spam?
WhatsApp needs to catch spammers without breaking encryption and reading the contents of their messages. To do that, it relies on what it calls ‘user actions’ – signals such as registration metadata and the rate at which messages are sent. It can look at these bits of information without decrypting any messages.
Jones said that the company uses three checkpoints where it bans accounts: at registration, during messaging, and in response to negative feedback like reporting. It uses the Facebook Immune System model, which performs real-time checks on every read or write action to identify abusive behavior and train its machine learning systems.
At registration, WhatsApp uses your phone number to verify where you’re signing up from. The machine learning system uses basic signals like device details, the device’s IP address, and carrier info to catch malicious accounts.
If a computer network tries to register accounts in bulk, or several accounts are registered with phone numbers similar to one that was recently misused, the system throws them out before they can send a single message. The company says that of the two million accounts it bans every month, 20 percent are caught at registration.
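The two registration signals described above can be sketched roughly as follows. This is an illustrative guess at the kind of check involved – the threshold, the prefix length, and the data structures are all invented for the example, not WhatsApp’s actual values:

```python
from collections import defaultdict

# Hypothetical sketch of two registration-time signals from the article:
# many sign-ups from one network, and numbers resembling a recently
# banned one. All thresholds here are assumptions for illustration.
BULK_THRESHOLD = 5  # assumed cutoff: sign-ups allowed per subnet per window


def looks_like_bulk_registration(ip_subnet, recent_signups):
    # recent_signups maps a subnet to how many accounts it has
    # registered in the current time window.
    return recent_signups[ip_subnet] >= BULK_THRESHOLD


def resembles_banned_number(phone, banned_numbers, shared_prefix=8):
    # Flag numbers sharing a long prefix with a recently banned number,
    # e.g. sequential SIMs bought in bulk.
    return any(phone[:shared_prefix] == b[:shared_prefix] for b in banned_numbers)


recent = defaultdict(int)
recent["203.0.113.0/24"] = 7
print(looks_like_bulk_registration("203.0.113.0/24", recent))       # True
print(resembles_banned_number("+911234567890", ["+911234567333"]))  # True
```

In a real system these checks would run before the SMS verification code is even sent, which matches the article’s point that such accounts are banned before they can message anyone.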
The company’s most interesting spam fighting work happens when bots try to message people. It looks for things like whether an account has a “typing…” indicator, or if it sends 100 messages in 10 seconds within five minutes of registering.
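A rate heuristic like the one described above – 100 messages in 10 seconds within minutes of registering – could be sketched like this. The thresholds are taken from the article’s example; the function and data model are invented for illustration:

```python
# Minimal sketch of the burst heuristic described above: an account that
# fires 100 messages within 10 seconds, shortly after registering, is far
# more likely a bot than a human. Thresholds mirror the article's example,
# not WhatsApp's real parameters.
def is_suspicious_burst(send_timestamps, registered_at,
                        burst_size=100, burst_window=10.0, newness=300.0):
    if len(send_timestamps) < burst_size:
        return False
    # Only scrutinize accounts still within `newness` seconds of sign-up.
    if send_timestamps[0] - registered_at > newness:
        return False
    # Slide a window over the (sorted) timestamps looking for a dense burst.
    for i in range(len(send_timestamps) - burst_size + 1):
        if send_timestamps[i + burst_size - 1] - send_timestamps[i] <= burst_window:
            return True
    return False


# 100 messages sent 0.05 s apart, starting 60 s after registration.
stamps = [60.0 + 0.05 * i for i in range(100)]
print(is_suspicious_burst(stamps, registered_at=0.0))  # True
```

Notice that a check like this reads only message timestamps, never message contents – consistent with the article’s point that detection works without breaking encryption.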
Additionally, if a spam account is sending malicious links, WhatsApp will mark it as suspicious. Last year, several Indian political parties used groups to spread propaganda during numerous state elections. To fight that, WhatsApp introduced a feature that lets you report a group and leave it, so the administrators can’t add you back.
Lastly, WhatsApp removes abusers when they’re reported by others. However, it also makes sure that a group of users is not targeting an individual through coordinated reporting. To do that, it checks whether the phone numbers reporting a specific user have ever interacted with that account.
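The interaction check described above can be sketched as a simple filter on reports: a report only counts if the reporter has actually exchanged messages with the reported account, which blunts coordinated mass-reporting of an innocent user. The data model here is invented for illustration, not WhatsApp’s:

```python
# Hedged sketch of the report-credibility check described above.
# interaction_log is a set of (sender, recipient) pairs meaning
# `sender` has messaged `recipient` at some point.
def credible_report_count(reported_user, reporters, interaction_log):
    def has_interacted(a, b):
        return (a, b) in interaction_log or (b, a) in interaction_log

    # Count only reports from accounts that have actually talked to
    # (or been messaged by) the reported user.
    return sum(1 for r in reporters if has_interacted(r, reported_user))


log = {("alice", "spammer"), ("spammer", "bob")}
# carol never talked to the account she's reporting, so her report
# carries no weight here.
print(credible_report_count("spammer", ["alice", "bob", "carol"], log))  # 2
```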
Apart from these measures, WhatsApp recently introduced a global limit on forwarding: a message can be forwarded to a maximum of five chats at a time, in order to slow the spread of spam by humans. The company even makes sure that its algorithm catches abusers on modified APKs (app installation files for Android) of the chat app.
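Conceptually, the forward cap is a client-side guard. The limit value below matches the article; the function and its API are invented for the sketch:

```python
# Illustrative sketch of a client-side forward cap like the five-chat
# limit mentioned above. The API is hypothetical.
FORWARD_LIMIT = 5


def forward_message(message, target_chats):
    if len(target_chats) > FORWARD_LIMIT:
        raise ValueError(f"can only forward to {FORWARD_LIMIT} chats at once")
    # Deliver one copy of the message per selected chat.
    return [(chat, message) for chat in target_chats]


print(len(forward_message("hi", ["a", "b", "c", "d", "e"])))  # 5
```

The cap doesn’t stop a determined spammer from forwarding repeatedly, but it adds friction to each hop, which is the stated goal of slowing human-driven spread.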
What are WhatsApp’s challenges?
While the Facebook-owned chat app is using machine learning to filter out spam, it still encrypts chats end-to-end at its core. As a result, it can’t read the contents of any message to determine whether it’s spam, fake news, or part of a real conversation. The Indian government has asked WhatsApp and other companies to uncover the original creators of such messages, but that would require breaking encryption, and the company has publicly said it would never do so.
There’s also the issue of abusers constantly creating new groups around the same topic and repeatedly adding the same users to them – rendering the option to report and leave a group less effective than it should be.
The company said it’s working on making these models detect spam faster. It wants to ensure that you’re connected with genuine people and not spammy bots.
“We’re not here to give people and bots a microphone, WhatsApp’s about private messaging,” Jones said during the meeting.
It’s good to know that WhatsApp takes its spam problem this seriously already, but it’s clear that it can do more. With India’s 2019 general election just around the corner, it’s important that the company steps up its efforts to contain spam.