
This article was published on July 23, 2020

Facebook built a powerful AI model to simulate entire social media networks in action

How can I get on the Facebook that only has bots?



When it comes to live-fire high-wire acts in the tech industry, there can be few endeavors more daunting than executing a security update to a software platform hosting more than 2.6 billion users.

But that’s exactly what Facebook does every time it rolls out an update. Sure, it mitigates the potential for disaster by rolling changes out in batches and conducting an enormous amount of internal testing. But at the end of the day, you never know precisely how any given change might upset the delicate balance that keeps Facebook on people’s screens.

Lucky for Facebook, the company’s AI team recently came up with a pretty clever way to make sure those software updates and tweaks don’t screw up its platform and cost it any users: they built a fake Facebook-scale social media network full of bots to test things out on.

Per a company blog post:

To improve software testing for these complex environments — particularly in product areas related to safety, security, and privacy — Facebook researchers have developed Web-Enabled Simulation (WES). WES is a new method for building the first highly realistic, large-scale simulations of complex social networks.


Basically, it’s a copy of Facebook that’s filled with bots. The bots are driven by AI models trained to imitate how humans behave on social media. In essence, they can add friends, like posts, and generally do anything a person could do on a given social media platform.
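To make the idea concrete, here’s a toy sketch of that kind of setup: a handful of bot agents taking turns performing ordinary user actions against a simulated social graph. This is not Facebook’s actual WES code; every name and structure below is made up for illustration.

import random
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    likes: set[str] = field(default_factory=set)

@dataclass
class SimulatedNetwork:
    """A stand-in for the simulated platform the bots act on."""
    friends: dict[str, set[str]] = field(default_factory=dict)
    posts: list[Post] = field(default_factory=list)

    def add_bot(self, name: str) -> None:
        self.friends.setdefault(name, set())

    def add_friend(self, a: str, b: str) -> None:
        self.friends[a].add(b)
        self.friends[b].add(a)

    def publish(self, author: str, text: str) -> None:
        self.posts.append(Post(author, text))

    def like(self, user: str, post: Post) -> None:
        post.likes.add(user)

class Bot:
    """A bot that performs one random human-like action per simulation step."""
    def __init__(self, name: str, network: SimulatedNetwork):
        self.name = name
        self.network = network
        network.add_bot(name)

    def step(self) -> None:
        action = random.choice(["friend", "post", "like"])
        others = [b for b in self.network.friends if b != self.name]
        if action == "friend" and others:
            self.network.add_friend(self.name, random.choice(others))
        elif action == "post":
            self.network.publish(self.name, f"hello from {self.name}")
        elif action == "like" and self.network.posts:
            self.network.like(self.name, random.choice(self.network.posts))

if __name__ == "__main__":
    net = SimulatedNetwork()
    bots = [Bot(f"bot_{i}", net) for i in range(100)]
    for _ in range(50):  # run 50 simulation steps
        for bot in bots:
            bot.step()
    print(len(net.posts), "posts,", sum(len(p.likes) for p in net.posts), "likes")

In Facebook’s real system the bots run against production code rather than a toy graph like this, which is what makes the simulation useful for testing actual updates.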

These bots aren’t like the ones you’re used to seeing on Twitter (shout out to @infinite_scream), which exist simply to respond when a text trigger fires. They’re meant to simulate the full experience of using a social media site.

According to Facebook:

We’ve used WES to build WW, a simulated Facebook environment using the platform’s actual production code base. With WW (the name is meant to show that this is a smaller version of the World Wide Web, or WWW), we can, for example, create realistic AI bots that seek to buy items that aren’t allowed on our platform, like guns or drugs. Because the bot is acting in the actual production version of Facebook, it can conduct searches, visit pages, send messages, and take other actions just as a real person might. Bots cannot interact with actual Facebook users, however, and their behavior cannot impact the experience of real users on the platform.
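The isolation guarantee in that last sentence is the crucial part. A hypothetical sketch of the rule it describes might look like the check below, where an action is only permitted when both accounts live inside the simulation; this is an illustration of the concept, not Facebook’s actual enforcement code.

SIMULATED_ACCOUNTS = {"bot_1", "bot_2", "bot_3"}

def can_interact(actor: str, target: str) -> bool:
    """Allow an action only if both accounts are simulated."""
    return actor in SIMULATED_ACCOUNTS and target in SIMULATED_ACCOUNTS

assert can_interact("bot_1", "bot_2")          # bot-to-bot: allowed
assert not can_interact("bot_1", "real_user")  # bot-to-human: blocked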

Quick take: This seems like a very intelligent way to determine whether a security function or new user feature is working properly without risking a broken experience in the human-facing version of the production code. I expect simulations like this will become the status quo for social media networks.

Realistically though, the simulation itself solves the biggest problem Facebook has: human users. What I wouldn’t give for an invitation to the bot-only version.

