
Here’s what social media giants are doing to keep extremism off your screen

Social media is a powerful tool for groups engaged in terrorist activities. The extremist content they post has sparked widespread changes across social networks. But are those changes enough? That’s the question representatives from Facebook, Twitter, and YouTube addressed this week, speaking before the U.S. Senate Committee on Commerce, Science, and Transportation in a hearing in Washington, D.C.

The hearing was designed to look into the social networks’ current efforts to curb extremist content, opening up a discussion on tech companies’ role in stemming the spread of online propaganda. While the companies have previously testified on Russian interference in the U.S. election, this hearing was the first time they spoke to the commerce committee about extremist content.

All three networks reported a significant increase in the amount of content removed from their respective platforms, as well as in preventing that content from being posted in the first place. In some cases, the networks’ efforts overlap, including the Global Internet Forum to Counter Terrorism for information sharing, while a shared database of more than 40,000 “hashes” helps keep content recognized on one network off another.
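Neither the forum nor its member companies publish exactly how that matching works, but the general idea can be sketched in a few lines of Python: each platform contributes fingerprints of removed files to a shared set, and new uploads are checked against it before they go live. The hash value, function names, and data here are purely illustrative, not the database’s real format.

```python
# Illustrative sketch only: the GIFCT hash-sharing database's real format and
# matching pipeline are not public. The idea is to flag an upload whose
# fingerprint already appears in the shared set contributed by member platforms.
import hashlib

# Placeholder entry (the SHA-256 of b"test"), standing in for real shared hashes.
shared_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(media_bytes: bytes) -> str:
    """Exact-match fingerprint of an uploaded file (plain SHA-256 for simplicity)."""
    return hashlib.sha256(media_bytes).hexdigest()

def matches_shared_database(media_bytes: bytes) -> bool:
    """True when the upload matches content another network already flagged."""
    return fingerprint(media_bytes) in shared_hashes

# A platform would run this check on new uploads before they are published.
if matches_shared_database(b"test"):
    print("Block the upload and queue it for human review")
```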

Facebook

Facebook’s head of Product Policy and Counterterrorism, Monika Bickert, said that Facebook is now able to remove 99 percent of ISIS and Al Qaeda-related posts before they reach a human flagger, thanks largely to machine learning; Facebook’s AI platform looks through image, video, and text material. The company is also working to teach the system how to recognize posts that support a terrorist organization, rather than generating false positives on posts condemning the behavior, for example.
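Facebook has not published details of those models, but the underlying task is supervised text classification: separating posts that support a group from posts that merely report on or condemn it. A toy sketch with scikit-learn, using invented example posts and labels rather than Facebook’s data or code, looks roughly like this:

```python
# Rough illustration of the distinction Bickert describes, not Facebook's system:
# a classifier trained to separate posts supporting a group from posts that only
# discuss or condemn it (the false-positive case). Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_posts = [
    "join the fight for the caliphate today",       # support
    "news report condemns the latest attack",       # discussion / condemnation
    "we pledge allegiance to the group's leader",   # support
    "analysts explain why the propaganda spreads",  # discussion / condemnation
]
labels = ["support", "benign", "support", "benign"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_posts, labels)

# In practice, high-confidence "support" predictions would be removed and
# borderline scores routed to human reviewers.
print(classifier.predict(["a documentary condemning the attack"]))
```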

For Facebook, AI is also being used to prevent some content uploads. Image matching prevents other accounts from uploading videos previously removed by the company. The company also works with experts “to track propaganda released by these groups and proactively insert it into our matching systems,” Bickert wrote in a prepared statement.
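The specifics of Facebook’s matching systems aren’t public either, but the re-upload problem is commonly handled with perceptual hashing, which tolerates re-encoding and small edits. The sketch below uses a simple average hash built with Pillow; the file names and threshold are made up for illustration:

```python
# Minimal sketch of the "image matching" idea, not Facebook's actual system:
# a perceptual (average) hash survives re-encoding and small edits, so a
# re-upload of previously removed media lands near a stored fingerprint.
from PIL import Image

def average_hash(path: str) -> int:
    """8x8 grayscale average hash packed into a 64-bit integer."""
    pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Fingerprints of media the platform already removed (hypothetical store).
removed_hashes = [average_hash("previously_removed.jpg")]

def blocks_upload(path: str, threshold: int = 5) -> bool:
    """Reject an upload that is perceptually close to removed content."""
    candidate = average_hash(path)
    return any(hamming(candidate, known) <= threshold for known in removed_hashes)
```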

Facebook also looks for “clusters” of related Pages, groups, posts, and profiles tied to a removed account. The social network is also improving its efforts to keep previously removed users from creating new accounts.
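The signals Facebook actually uses to link those clusters aren’t disclosed, but conceptually it is a graph traversal: start from a removed account and walk outward through relationships such as shared admins, members, and likes. A minimal sketch over a made-up relationship graph:

```python
# Hedged sketch of the "cluster" idea: the real signals are not public, so this
# just walks a toy relationship graph outward from a removed account to surface
# related entities for review.
from collections import deque

# Hypothetical relationship graph: entity -> entities linked to it.
graph = {
    "removed_account": ["page_a", "group_b"],
    "page_a": ["profile_c"],
    "group_b": ["profile_c", "page_d"],
    "profile_c": [],
    "page_d": [],
}

def related_cluster(start: str) -> set[str]:
    """Breadth-first walk collecting everything reachable from a removed account."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph.get(queue.popleft(), []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

print(sorted(related_cluster("removed_account")))
# ['group_b', 'page_a', 'page_d', 'profile_c']
```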

Facebook has already added 3,000 people to its review team, and this year it will expand to a total of 20,000 people working to identify content that violates its community standards, including extremist content. Another 180 people, Bickert said, are trained specifically to handle terrorist content.

At the same time, Facebook is working to further “counterspeech,” or content that fights against extremism and other hateful posts.

“On terrorist content, our view is simple: There is no place on Facebook for terrorism,” Bickert said. “Our longstanding policies, which are posted on our site, make clear that we do not allow terrorists to have any presence on Facebook. Even if they are not posting content that would violate our policies, we remove their accounts as soon as we find them.”

Twitter

Twitter’s director of Public Policy and Philanthropy, Carlos Monje Jr., said the platform has now suspended more than one million accounts for terrorism since mid-2015 — including 574,070 accounts just last year, a jump from the more than 67,000 suspensions in 2015. A big part of that increase is the technology used to detect those accounts, which caught one-third of the accounts in 2015 but is now responsible for 90 percent of the latest suspensions.

“While there is no ‘magic algorithm’ for identifying terrorist content on the internet, we have increasingly tapped technology in efforts to improve the effectiveness of our in-house proprietary anti-spam technology,” Monje said. “This technology supplements reports from our users and dramatically augments our ability to identify and remove violative content from Twitter.”

Extremist content was part of Twitter’s rule overhaul late last year, prompted in part by #WomenBoycottTwitter. Those expanded rules went beyond tweets to include handles, profile images, and other profile information.

On a different note, the platform is also working to prevent election misinformation and will soon notify users who viewed such propaganda, while donating the money earned from those ads to additional research. Twitter has already shared updates designed specifically for political ads, and verifying all state and federal candidates is part of those changes as well.

YouTube

Juniper Downs, YouTube’s director of Public Policy and Government Relations, said machine learning now removes 98 percent of “violent extremism” videos, up from 40 percent a year ago. Around 70 percent of those videos are removed within eight hours, and half within two hours, Downs said.

Along with the expanded software, YouTube has added more organizations to its Trusted Flagger program, including counterterrorism groups. Within parent company Google, the number of staff working on videos that violate policies will grow to 10,000 this year. This year will also bring a transparency report on flagged videos.

For videos that fall into a gray area without outright violating policy, YouTube has already announced that they won’t be monetized or appear in recommendations, and comments on them will be disabled. Like Facebook, YouTube is also making counter-speech part of the initiative, including through its Creators for Change program.

“No single component can solve this problem in isolation,” Downs wrote in her prepared statement. “To get this right, we must all work together.”

Moving forward

While the session has been described as “mostly genial,” with each platform reporting higher numbers of removed content and accounts, Clint Watts, the Robert A. Fox Fellow at the Foreign Policy Research Institute, suggested that social networks can do more: reconsidering anonymous accounts, eliminating non-human bot accounts or requiring a CAPTCHA to weed them out, and extending federal regulations for political ads to social media.

“Social media companies realize the damage of these bad actors far too late,” Watts wrote in a prepared statement. “They race to implement policies to prevent the last information attack, but have yet to anticipate the next abuse of their social media platforms by emerging threats seeking to do bad things to good people.”

A video of the hearing, along with prepared statements from each network, is publicly available on the committee’s website.
