
This article was published on April 16, 2020

Facebook tests security measures by cloning itself and setting bots free

Researchers are using the simulation to find bugs in the real thing


Image by: Rog01

In a parallel social network hidden from prying eyes, an army of Facebook bots is abusing the platform to uncover hidden security flaws and work out ways to improve security.

The fake users try to scam other bots, post illicit content, and steal personal data in a scaled-down version of Facebook that uses the same code as the real platform.

As Facebook puts it, “the simulation is executed on the real system itself.”

When a bot discovers a vulnerability or bug on the platform, the system automatically recommends changes to Facebook engineers. They can test fixes in the simulation before making updates to the live version.


Facebook revealed the “web-enabled simulation” (WES) in a research paper published on Wednesday. The authors describe a virtual social network populated by bots, which “simulate real-user interactions and social behavior on the real platform infrastructure, isolated from production users.”

Bad Facebook bots

Software simulations are not a new idea, but Facebook has taken an unusual approach to the concept.

While most simulations take place in newly created models of reality, WES runs on top of the same lines of code as the real platform. Facebook researchers argue this more accurately represents the increasingly complex interactions on the platform.

Facebook trained the bots to simulate human behavior using reinforcement learning, a method that rewards them when they execute a desired action. The researchers then release the bots to test different abuses of the platform.

When the system simulates a scam, one bot plays the scammer, and another their victim. The scammer bot is rewarded for finding suitable targets, which are programmed to exhibit the behaviors of a typical dupe.
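To make that reward loop concrete, here is a minimal Python sketch of a scammer bot learning to target a victim bot. Every name and the toy action space here are assumptions for illustration; Facebook has not published WES code, and a real bot would learn over a far richer set of platform actions.

```python
import random

# Hypothetical sketch of the scammer/victim pairing described above.
# The action space and class names are invented for illustration.

ACTIONS = ["send_friend_request", "send_scam_message", "idle"]

class VictimBot:
    """Programmed to exhibit the behaviors of a typical dupe."""
    def __init__(self, gullibility=0.8):
        self.gullibility = gullibility

    def responds_to(self, action):
        # A gullible victim usually engages with scam messages.
        return action == "send_scam_message" and random.random() < self.gullibility

class ScammerBot:
    """Learns which actions earn reward via trial and error."""
    def __init__(self):
        self.value = {a: 0.0 for a in ACTIONS}  # running value estimate per action

    def choose_action(self, epsilon=0.1):
        if random.random() < epsilon:               # explore occasionally
            return random.choice(ACTIONS)
        return max(self.value, key=self.value.get)  # otherwise exploit the best-known action

    def learn(self, action, reward, lr=0.1):
        # Nudge the value estimate toward the observed reward.
        self.value[action] += lr * (reward - self.value[action])

scammer, victim = ScammerBot(), VictimBot()
for episode in range(500):
    action = scammer.choose_action()
    reward = 1.0 if victim.responds_to(action) else 0.0  # rewarded for finding a suitable target
    scammer.learn(action, reward)

print(scammer.value)  # the scam action should carry the highest value after training
```

After a few hundred episodes the scam action accumulates the highest value estimate, which is the sense in which the scammer bot "learns" to find suitable targets.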

Other bots try to post illicit content on the shadow Facebook, break privacy rules such as accessing private messages, or gather as much data as possible from other fake users. The system simultaneously tries to detect the rule-breakers, searches for ways to stop them, and looks for bugs that the bots have exploited. It can also flag new problems created by software updates, such as a code change that allowed the bots to access private photos.
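The detection side can be pictured as an audit pass over the actions the bots managed to perform. The sketch below is an assumption-laden illustration, not WES's actual rule engine: it flags any simulated action that succeeded despite violating a platform rule, the kind of finding the system would surface to engineers.

```python
from dataclasses import dataclass

# Hypothetical audit pass: the rule names, fields, and output format
# are invented for illustration and do not come from the WES paper.

@dataclass
class BotAction:
    bot_id: str
    kind: str               # e.g. "read_message", "view_photo", "post_content"
    target_is_friend: bool

RULES = {
    "read_message": lambda a: a.target_is_friend,  # only conversations you are part of
    "view_photo":   lambda a: a.target_is_friend,  # private photos require a connection
    "post_content": lambda a: True,                # posting is allowed; content is checked elsewhere
}

def audit(actions):
    """Return actions that succeeded despite violating a rule: candidate bugs to review."""
    return [a for a in actions if not RULES.get(a.kind, lambda _: False)(a)]

simulated = [
    BotAction("bot_7", "view_photo", target_is_friend=False),  # should have been blocked
    BotAction("bot_3", "read_message", target_is_friend=True),
]
for violation in audit(simulated):
    print(f"flag for engineers: {violation.bot_id} performed {violation.kind} without permission")
```

Here the photo access by "bot_7" is the analogue of the example above: an action the platform should have blocked but did not, pointing engineers at the offending code path.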

However, the system is not infallible. There is a risk that the virtual and real worlds collide, as the researchers admit.

“Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users,” they warn.

Let’s hope these devious bots don’t escape from their simulation and enter our own.
