Facebook is using AI to spot scammers and imposters on Messenger — without reading your chats.
The feature uses machine learning to detect suspicious activity, such as adults sending out loads of friend or message requests to children.
When it spots suspicious behavior, it displays an in-app warning at the top of the conversation. The warning prompts users to block or ignore the shady account and offers tips on how to avoid potential scams.
Facebook says the feature doesn’t need to look at the messages themselves. Instead, it searches for behavioral signals, such as an account sending out numerous requests in a short period of time.
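Facebook hasn’t published how its detection model works, but a minimal sketch of one such behavioral signal, a sliding-window count of requests sent to minors, might look like the following. The window size, threshold, and SenderActivity class here are hypothetical illustrations, not Facebook’s implementation; the point is that only request metadata (who, whom, when) is inspected, never message content.

```python
from collections import deque
from dataclasses import dataclass, field
from time import time

# Illustrative only: hypothetical window and threshold values.
WINDOW_SECONDS = 3600        # look at the last hour of activity
MAX_REQUESTS_TO_MINORS = 5   # flag senders exceeding this count

@dataclass
class SenderActivity:
    """Tracks one sender's request timestamps; no message content is stored."""
    timestamps: deque = field(default_factory=deque)

    def record_request(self, recipient_is_minor: bool, now: float | None = None) -> bool:
        """Record a friend/message request and return True if the sender looks suspicious."""
        now = time() if now is None else now
        if recipient_is_minor:
            self.timestamps.append(now)
        # Drop events that fall outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > WINDOW_SECONDS:
            self.timestamps.popleft()
        return len(self.timestamps) > MAX_REQUESTS_TO_MINORS

# Usage: seven requests to minors within one hour trips the threshold.
activity = SenderActivity()
suspicious = False
for i in range(7):
    suspicious = activity.record_request(recipient_is_minor=True, now=1000.0 + i)
print(suspicious)  # True
```

A real system would combine many such signals in a machine learning model rather than a single hard threshold, but the same principle applies: the inputs are patterns of behavior, not the contents of chats.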
That means it will work when Messenger becomes end-to-end encrypted, Facebook messaging privacy chief Jay Sullivan said in a statement:
We designed this safety feature to work with full encryption. People should be able to communicate securely and privately with friends and loved ones without anyone listening to or monitoring their conversations.
Facebook’s safety strategy
Facebook has chosen not to automatically block suspicious accounts that the new feature flags. Instead, it will prompt users to make their own informed decisions.
This approach is similar to the alerts Facebook now sends to users who interact with coronavirus misinformation, which direct them to a myth-debunking webpage.
Stephen Balkam, CEO of the Family Online Safety Institute, said he approved of the strategy:
It’s important to use language that empowers people to make wise decisions and think more critically about who they’re interacting with online. We’re especially glad to see this reflected in the thoughtful approach around safety considerations for younger users.
Facebook started rolling out the new feature on Android in March and will add it to iOS next week.