
Governments are stepping in to regulate social media, but there may be a better way

Maskot/Getty Images

Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement.


Social media moderation is often about finding a balance between creating a safe online environment and inhibiting free speech. In many cases, the social media platforms themselves step up to protect users, as with Twitter’s recent rule overhaul, or to keep advertisers happy, as with YouTube’s recent changes after big advertisers boycotted the video platform. But in other cases, such as Germany’s new hate speech law and a similar law under consideration in the European Union, moderation is government mandated.


Earlier in January, Facebook, Twitter, and YouTube testified before a Senate committee on what steps the platforms are taking to keep terrorist propaganda offline. While such hearings may seem rare, the same companies also testified before Congress on Russian involvement in the 2016 U.S. election.

So should the government regulate social media platforms — or is there another option? In a recent white paper, the New York University Stern Center for Business and Human Rights suggested a different approach based on its research: moderation from the social media companies themselves, with limited government involvement. The report, Harmful Content: The Role of Internet Platform Companies in Fighting Terrorist Incitement and Politically Motivated Disinformation, looks specifically at political propaganda and extremism. While the group says social media platforms shouldn’t be held liable for such content, the research suggests the platforms can — and should — do more to regulate content.

The group suggests that, because social media platforms have already made progress in preventing or removing such content, such moderation is not only possible, but preferable to government interference. Social media platforms have previously leaned toward no moderation at all, which, unlike a newspaper that chooses what news to publish, meant the platforms had no legal liability. Recent laws directed at social media are changing that — in Germany, social networks could pay up to $60 million in fines if hate speech isn’t removed within 24 hours.

The report doesn’t push to make social networks liable for the information users share on the platform, but suggests a new category distinct from both traditional news editors and publishers that don’t regulate content at all. “This long-standing position rests on an incorrect premise that either the platforms serve as fully responsible (and potentially liable) news editors, or they make no judgements at all about pernicious content,” the white paper reads. “We argue for a third way — a new paradigm for how internet platforms govern themselves.”

Chart: social media moderation statistics (Statista/Martin Armstrong)

The spread of misinformation with a political motivation is hardly new, the group points out, as evidenced by the “coffin handbills” handed out during Andrew Jackson’s campaign in 1828 that accused the future president of murder and cannibalism. At one time, misinformation could potentially be countered with, as Supreme Court Justice Louis Brandeis once said, “more free speech.” The faster speed at which information travels on social media, however, changes that. The top 20 fake news reports on Facebook during the 2016 election had more engagement than the same number of stories from major media outlets, according to BuzzFeed News.

“The problem with turning to the government to regulate more aggressively is that it could easily, and probably would, result in an overreaction by the companies to avoid whatever punishment was put in place,” Paul Barrett, the deputy director of the NYU Stern Center for Business and Human Rights, told Digital Trends. “That would interfere with the free expression that is one of the benefits of social media… If the platforms do this work themselves, they can do it more precisely and do it without government overreach.”


The group isn’t suggesting that the government stay out of social media entirely — legislation that would apply the same rules to political ads on social media that already apply to political ads on TV and radio, Barrett says, is one example of a law that wouldn’t overreach. But, the paper argues, if social media companies step up their efforts against politically motivated misinformation and terrorist propaganda, further government involvement wouldn’t be necessary.

The white paper suggests social networks enhance their own governance, continue to refine their algorithms, use more “friction” (like warnings and notifications for suspicious content), expand human oversight, adjust advertising, and continue to share knowledge with other networks. Finally, the group suggests identifying exactly what the government’s role in the process should be.

Barrett recognizes that those suggestions aren’t going to be free for the companies, but calls the steps short-term investments for long-term results. Those changes are, in part, already in motion — like Facebook CEO Mark Zuckerberg’s comment that the company’s profits would be affected by safety changes the platform plans to implement, including an increase in human review staff to 20,000 this year.

The expansion of Facebook’s review staff joins a handful of other changes social media companies have launched since the report. Twitter has booted hate groups, YouTube is adding additional human review staff and expanding algorithms to more categories, and Zuckerberg has made curbing abuse on Facebook his goal for 2018.

“The kind of free speech that we are most interested in promoting — and that the First Amendment is directed at — is speech related to political matters, public affairs and personal expression,” Barrett said. “None of those kinds of speech would be affected by an effort to screen out disguised, phony advertising that purports to come from organizations that don’t really exist and actually attempt to undermine discussions on elections. There will be some limitations on fraudulent speech and violent speech, but those are not the types of speech protected by the First Amendment. We can afford to lose those types of speech to create an environment where free speech is promoted.”

Hillary K. Grigonis