Twitter’s war against QAnon may be paying off.
In a tweet on Thursday, Twitter said that impressions on QAnon-related content have dropped by more than half since the company began taking action against the far-right conspiracy theory group two months ago, the result of policies aimed at reducing “coordinated harmful activity” on the platform.
“In July, we began removing Tweets associated with QAnon from Trends and recommendations, and not highlighting them in conversations and Search,” the company said Thursday. “Impressions on this content dropped by more than 50%, decreasing the amount of unhealthy and harmful content on timelines.”
Yoel Roth, Twitter’s head of site integrity, said in a tweet, “Removing harmful content from recommendations and amplification surfaces works. It takes the wind out of the sails of how this content propagates across Twitter.”
He continued, “These are encouraging results, and we’re going to continue to invest in building out our approach.”
QAnon, which originated on the fringe message board 4chan, is a conspiracy theory alleging — without proof — that President Donald Trump is waging a secret battle against Satanic child abusers, most often prominent Democrats and liberal celebrities.
The once-fringe cult has since moved its messaging and baseless rhetoric to popular sites like Facebook, YouTube, and TikTok.
At times, the group has been able to capture nationwide attention by promoting misinformation and manipulating media. The group reignited the 2016 “Pizzagate” conspiracy theory and was responsible for spreading a baseless theory about the furniture retailer Wayfair earlier this summer.
Twitter was the first of the major social media companies to take direct, targeted action against the group in July, removing thousands of accounts and pledging to ban QAnon-related hashtags and topics from appearing in its “Trending” section.
Thursday’s announcement that impressions on QAnon-related content have been cut in half suggests that content moderation can help quell the spread of misinformation that might otherwise lead to real-world violence.
Twitter’s response to misinformation and hate speech in recent months has been a stark contrast to its previous hands-off policy.
In May, the company took action against Trump for the first time, applying a fact-check label after he shared an inaccurate tweet about mail-in voting, and it has continued to moderate his tweets since. The president has occasionally retweeted accounts known to promote QAnon.
However, QAnon content still exists on Twitter, and the company doesn’t plan on banning all of it. Supporters who are familiar with the group’s most-used hashtags and keywords can easily find its content on the platform.