Facebook announced Tuesday that it is banning content that includes explicit praise, support, or representation of white nationalism or separatism, according to a report by Motherboard. It is the latest move by the company to crack down on extremist ideologies that have spread quickly in the age of social media.
Facebook’s decision comes less than two weeks after a terrorist attack on the Al Noor Mosque in Christchurch, New Zealand, in which a gunman killed 50 people and injured 50 others. The man streamed the attack on Facebook Live, and the footage quickly circulated on other websites.
The shooter was an avowed white supremacist who had posted a manifesto online claiming that immigration and declining birth rates among white people in Europe were a threat to European culture. Conspiracy theories like this have taken root on a number of websites, both popular and obscure, and in the wake of attacks like the one in Christchurch, critics have turned their attention to social media platforms like Facebook and Twitter, which in the past have been accused of taking a laissez-faire approach to speech.
After a white supremacist rally in Charlottesville, Virginia, in 2017 resulted in violence and the death of counterprotester Heather Heyer, Facebook adjusted its policies to ban explicit white supremacist posts, but it still allowed posts advocating white nationalism and white separatism.
As Facebook has faced increasing scrutiny in the last few years, founder Mark Zuckerberg has been more vocal in explaining the company’s decisions and philosophy, as well as acknowledging the trade-offs he thinks those decisions involve.
“There are really two core principles at play here,” he said. “There’s giving people a voice, so that people can express their opinions. Then, there’s keeping the community safe, which I think is really important. We’re not gonna let people plan violence or attack each other or do bad things. Within this, those principles have real trade-offs and real tug on each other.”
Zuckerberg added: “The principles that we have on what we remove from the service are: If it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform.”