Despite Facebook’s many efforts, bad actors somehow always manage to slip through its safeguards and policies. The social network is now experimenting with a new way to buttress its anti-spam walls and preempt bad behavior that could breach its safeguards: an army of bots.
Facebook says it’s developing a new system of bots that can simulate bad behaviors and stress-test its platform to unearth flaws and loopholes. These automated bots are trained to act like real people by drawing on the trove of behavior models Facebook has gathered from its more than two billion users.
To ensure this experiment doesn’t interfere with real users, Facebook has also built a sort of parallel version of its social network. Here, the bots are let loose and allowed to run rampant — they can message each other, comment on dummy posts, send friend requests, visit pages, and more. More importantly, these A.I. bots are programmed to simulate extreme scenarios such as selling drugs and guns to test how Facebook’s algorithms would try to prevent them.
Facebook claims this new system can host “thousands or even millions of bots.” Because it runs on the same code real users experience, the company adds, “the bots’ actions are faithful to the effects that would be witnessed by real people using the platform.”
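To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how an agent-based simulation along these lines might be structured. None of this is Facebook’s actual code: the `SandboxPlatform` class, the `BadActorBot` agent, and the keyword-based integrity check are hypothetical stand-ins for the real behavior models and detection algorithms the company describes.

```python
import random

# Hypothetical stand-in for policy-violation detection (not Facebook's models).
BANNED_TERMS = {"sell drugs", "sell guns"}


class SandboxPlatform:
    """An isolated copy of platform logic; no real users are reachable."""

    def __init__(self):
        self.flagged = []    # messages the integrity check caught
        self.delivered = []  # messages that got through

    def post_message(self, sender, text):
        # The check runs on the same code path every message would take,
        # so the bots' actions mirror what real users would trigger.
        if any(term in text.lower() for term in BANNED_TERMS):
            self.flagged.append((sender, text))
        else:
            self.delivered.append((sender, text))


class BadActorBot:
    """An agent scripted (here, hard-coded) to probe for loopholes."""

    def __init__(self, name):
        self.name = name

    def act(self, platform):
        attempts = ["hey, want to buy?", "I sell drugs cheap", "nice post!"]
        platform.post_message(self.name, random.choice(attempts))


if __name__ == "__main__":
    sandbox = SandboxPlatform()
    bots = [BadActorBot(f"bot{i}") for i in range(1000)]  # scale up freely
    for bot in bots:
        bot.act(sandbox)
    print(f"flagged: {len(sandbox.flagged)}, delivered: {len(sandbox.delivered)}")
```

The point of the sketch is the architecture, not the detection logic: because the agents exercise the same code paths as real users, any message that slips into `delivered` represents a loophole engineers could find and fix before it affects the live platform.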
“While the project is in a research-only stage at the moment, the hope is that one day it will help us improve our services and spot potential reliability or integrity issues before they affect real people using the platform,” wrote the project’s lead, Mark Harman, in a blog post.
It’s unclear at the moment how effective Facebook’s new simulation environment will be. As Harman mentioned, it’s still in its early stages, and the company hasn’t yet put any of its findings to use in public-facing updates. Over the last few years, the social network has actively invested in and supported artificial intelligence research to develop new tools for fighting harassment and spam. At its annual developer conference two years ago, Mark Zuckerberg announced that the company was building artificial intelligence tools to tackle posts featuring terrorist content, hate speech, spam, and more.