Social media giants are once again being asked to testify before the U.S. Congress — this time, about “extremist propaganda.” Facebook, Twitter and YouTube will each have representatives testify before the Senate Commerce Committee next week, on January 17. The hearing, called “Terrorism and Social Media: #IsBigTechDoingEnough,” is expected to look into how the platforms handle extremist content.
The committee's chairman, Sen. John Thune, said the hearing is meant to examine how social media platforms are handling extremist propaganda and what the tech giants are doing to prevent the spread of those posts. According to Recode, the networks' handling of hate speech, racism, fake news and other abusive content could also become part of the discussion.
Facebook's head of global policy management, Monika Bickert; Twitter's director of public policy and philanthropy, Carlos Monje; and YouTube's global head of public policy and government relations, Juniper Downs, will testify before the committee.
Questions about how social networks handle extremist content have already prompted several changes at the platforms over the past two years, but the hearing will examine whether those steps are enough. Removing extremist content isn't a U.S.-only issue, either: the European Union is currently running a voluntary Code of Conduct that calls on networks to quickly detect and remove hate speech. In 2016, the three social networks, along with Microsoft, formed a group to build a shared database of hashes of extremist content, with the goal of making those types of posts easier to remove across all of the platforms.
YouTube has made a number of changes over the last year after several big brands pulled their ads upon discovering they were running alongside extremist videos and other hate speech. In August, the platform said that advances in its A.I. systems meant 75 percent of those videos were removed before a single user flagged them, while videos that fall into a gray area, objectionable but not quite against community guidelines, began seeing penalties.
Twitter, facing growing concern over extremist content in 2016, suspended 230,000 accounts that August and removed around another 377,000 accounts over the following six months. The platform also recently began removing the blue verification badge from some users after facing criticism for verifying a known white supremacist.
Last year, Facebook shared insight into how it tackles extremist content using a combination of A.I. and human reviewers. Its systems can, for example, identify duplicates of videos the company has already removed, preventing another group from re-sharing the same content, while other algorithms scan text for keywords. The company's review staff is expected to grow to 20,000 this year, and CEO Mark Zuckerberg has made fixing abuse on the platform his personal goal for 2018.
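Facebook hasn't published the details of that system, but matching re-uploads against fingerprints of previously removed media, combined with a simple keyword scan of the accompanying text, is the general shape of what's being described. The sketch below is a hypothetical illustration under those assumptions, not Facebook's code: the names (removed_content_hashes, flagged_keywords, should_block) are invented for the example, and a production system would use perceptual hashing so re-encoded or trimmed copies still match, rather than the exact SHA-256 fingerprint used here.

```python
import hashlib

# Hypothetical illustration of duplicate detection and keyword scanning;
# not Facebook's actual system.

# Fingerprints of media files that moderators have already removed.
removed_content_hashes = set()

# Placeholder keyword list for the accompanying post text.
flagged_keywords = {"example_banned_phrase", "another_banned_phrase"}


def fingerprint(file_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded file (exact SHA-256 here)."""
    return hashlib.sha256(file_bytes).hexdigest()


def record_removal(file_bytes: bytes) -> None:
    """Remember a removed file so identical re-uploads can be blocked."""
    removed_content_hashes.add(fingerprint(file_bytes))


def should_block(file_bytes: bytes, post_text: str) -> bool:
    """Block uploads that duplicate removed media or contain flagged keywords."""
    if fingerprint(file_bytes) in removed_content_hashes:
        return True
    words = post_text.lower().split()
    return any(keyword in words for keyword in flagged_keywords)


if __name__ == "__main__":
    original = b"fake video bytes"
    record_removal(original)                        # moderators remove the original
    print(should_block(original, "reposting this")) # True: exact duplicate of removed media
    print(should_block(b"new video", "hello"))      # False: no hash or keyword match
```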
According to The Hill, it's rare to see representatives from the social media giants testify in Washington, though all three companies have also testified about Russian interference in the U.S. election. A slew of recent developments could help legislation catch up to social media technology, ranging from new proposals for regulating political ads in the U.S. to a law now in effect in Germany requiring platforms to remove hate speech.