Facebook’s proactive hate speech detection technology has gotten much better, according to a new report.
The social network published the sixth edition of its Community Standards Enforcement Report on Tuesday, August 11. The big takeaway is that the company is getting better at detecting hate speech. The report covers data from the second quarter of 2020.
According to the report, Facebook’s proactive detection rate for hate speech (the share of hate speech content it found before users reported it) is now 95%, up from 89% in the first quarter of 2020. Facebook said it increased its actions against hate speech content from 9.6 million instances in the first quarter of the year to 22.5 million in the second quarter.
“Thanks to both improvements in our technology and the return of some content reviewers, we saw increases in the amount of content we took action on connected to organized hate on Instagram and bullying and harassment on both Facebook and Instagram,” wrote Guy Rosen, Facebook’s vice president of integrity, in a blog post.
Facebook said that expanding its automated detection to more languages, such as Spanish and Burmese, allowed it to take action against more hate speech content.
Another area where Facebook has been improving is terrorism content. The platform said it increased its actions against terrorism content from 6.3 million instances in the first quarter to 8.7 million in the second quarter.
To push these numbers even higher, Facebook is experimenting with an army of malicious bots as a way to research anti-spam methods and preempt bad behavior. The A.I. bots are programmed to simulate harmful activity, such as posting hate speech, to test how Facebook’s algorithms would try to prevent it.
Facebook is clearly trying to be more transparent about its content moderation practices and how it enforces them. To take it one step further, the social network is even asking external auditors to independently review future Community Standards Enforcement Reports. Facebook said the audit would take place next year and that the auditors would publish their findings.