
You’re probably seeing more social media propaganda, but don’t blame the bots

Bots commonly shoulder the blame for social media propaganda, but a recent study out of the U.K. suggests not only that organized political misinformation campaigns have more than doubled in the last two years, but that bots take second place to human-run manipulation.

The Global Disinformation Order study, conducted by the University of Oxford, found evidence of social media manipulation by a government agency or political party in 70 countries, up from 48 in 2018 and 28 in 2017. The project has collected data annually since 2017, but the researchers suggest political propaganda has leveraged social media for the last decade.

The study, co-authored by Samantha Bradshaw and Phillip N. Howard, tallies up reports from around the world on cyber troops, defined as “government or political party actors tasked with manipulating public opinion online.” While the report focuses on propaganda that can be traced back to a government agency, politician, or political party, the researchers found formal coordination with private communication firms and, in more than 40% of the countries, with civic organizations and citizens.

Much of the propaganda is created by actual people: 87% of the countries use human accounts, compared with 80% using bots. In some cases, the study even identified countries, including Russia and Israel, hiring student or youth groups to run computational propaganda.

The increase in countries with organized misinformation likely reflects a genuine rise in activity, but it is also inflated by researchers’ growing ability to detect it. “The number of cases we identified was the most surprising thing about this year’s study. Partially, the growth has to do with more state actors seeing social media as a tool of geopolitical power,” Bradshaw, study co-author and researcher at the Computational Propaganda Project, told Digital Trends. “But not all of the cases were new, per se. Many were older examples that were uncovered by journalists and other independent researchers, who are now equipped with better tools and a better vocabulary for identifying instances of computational propaganda in their own country context.”

This year, the researchers also identified a new category of accounts used for manipulation — in addition to human accounts, bot accounts, and “cyborg” accounts that use both, 7% of the countries hacked or stole real accounts to use in their campaigns. Guatemala, Iran, North Korea, Russia, and Uzbekistan were among the countries using hacked or stolen accounts.

More than half of the countries with evidence of political propaganda — 45 out of 70 — used these tactics during elections. Among those examples, the study suggests, are politicians with fake followers, targeted ads using manipulated media, and micro-targeting.

So what type of information are the campaigns spreading? Attacking the political opposition was the most widespread tactic, found in 89% of the countries, followed by spreading pro-government or pro-party propaganda; 34% spread information designed to create division.

While nearly 75% used tactics like memes, fake news, and videos, manipulation also took more covert forms beyond the media being shared. About 68% used state-sponsored trolls to attack opponents such as journalists and activists. Many also abused platforms’ reporting tools to censor speech, hoping the automated process would remove content that doesn’t actually violate any platform rules. Another 73% of the countries flooded hashtags to make a message appear more widespread.

Most of the cyber troop activity remains on the biggest social network, Facebook, but the researchers saw an increase in campaigns on platforms focused on photos and video, including Instagram and YouTube. The researchers also saw increased activity on WhatsApp.

The United States ranked among the “high cyber troop capacity” group, which indicates a full-time operation with a big budget focused on both domestic and foreign propaganda. The report suggests the U.S. uses disinformation, data, and artificial amplification of content from human, bot, and cyborg (or mixed human-bot) accounts. The study also showed evidence the U.S. used all five messaging categories included in the study: pro-government support, attacks on the opposition, distraction, driving divisions, and suppression.

Bradshaw says that social media companies should do more to create a better place to connect and discuss politics. “Determining whether a post is part of a manipulation campaign is no easy task. It often requires looking at broad trends across social media and the conversation that is taking place about a particular topic,” she said.

While Bradshaw says detecting misinformation shouldn’t be left solely to the user, some misinformation can be picked up by looking for accounts that post in multiple languages, conducting reverse image searches, and using free online tools to detect automated accounts. 

The 2019 study highlights shifts in political propaganda, which existed long before the internet but has likely leveraged social media for a decade. The study authors end the report with a question: “Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?”

Hillary K. Grigonis