Facebook, YouTube, and Twitter are quick to share stats on how their artificial intelligence filters are improving, but the aftermath of last week’s shooting in Christchurch, New Zealand, made gaps in the system terrifyingly obvious. Hundreds of thousands of copies of video shot from the shooter’s point of view were uploaded to social media after a copy of the original was posted to the online message board 8chan.
The attack on two mosques in Christchurch left 50 dead and another 50 wounded, according to authorities. The 28-year-old shooter wore a helmet-mounted camera and livestreamed the shootings in a way that, some said, was “designed for maximum spread on social media.” YouTube’s chief product officer, Neal Mohan, says the shooting footage was uploaded faster, and in more copies, than footage of previous incidents.
Three days later, social media platforms were still struggling to keep copies of the 17-minute video off their networks. YouTube temporarily erred on the side of caution and disabled the human review step that normally catches videos the platform’s A.I. system mistakenly flags as terms violations, meaning flagged videos now come down without a second look. The change remains in effect, and YouTubers who believe their videos were miscategorized are encouraged to file for reinstatement. Some search functions also remain disabled.
While YouTube didn’t share exact upload numbers, Facebook says it removed 1.5 million videos of the attack. About 80 percent of those, 1.2 million, were blocked at upload before ever making it onto the platform, while the remaining 300,000 were removed within the first 24 hours after posting. The live broadcast was viewed fewer than 200 times, the company said, and the video was viewed around 4,000 times in total before being removed. No user reported the video until 29 minutes after the shooter started livestreaming. Facebook removed the suspected shooter’s accounts on both Facebook and Instagram.
Some of social media’s past efforts to keep violence and hate out worked, like the upload filters that kept those 1.2 million videos off Facebook entirely. YouTube’s earlier mistake of letting violent videos surface in search suggestions didn’t recur, and users searching for the attack were redirected to news coverage instead.
But as the hundreds of thousands of successful uploads show, the A.I. designed to recognize offending videos isn’t foolproof. Many networks use a technique called hashing to prevent mass uploads by recognizing when the same video is uploaded more than once. But according to The Washington Post, some users were able to bypass the hashing by shortening the video, adding logos, or even applying an effect that made the real-life footage look like a video game. While the original video was removed, the networks struggled with these “remixes” of the livestream.
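To see why small edits defeat this kind of matching, consider the difference between exact and perceptual hashing. The Python sketch below is illustrative only, not any platform’s actual system; the dHash algorithm, the 10-bit matching threshold, and the function names are all assumptions made for the example. An exact cryptographic hash changes completely if a single byte of the file changes, which is why trimming a clip or stamping a logo on it slips past naive matching. A perceptual hash instead fingerprints what a frame looks like, so near-duplicates land within a small bit distance of each other.

```python
# Illustrative sketch only; not any platform's production system.
# Requires Pillow (pip install Pillow).
import hashlib
from PIL import Image

def exact_hash(path: str) -> str:
    """Cryptographic hash of the raw bytes: editing one byte changes it entirely."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dhash(frame: Image.Image, hash_size: int = 8) -> int:
    """Perceptual 'difference hash' of a frame: it fingerprints the brightness
    gradient of a tiny grayscale thumbnail, so it survives re-encoding,
    mild cropping, or a logo overlay."""
    gray = frame.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left < right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same frame."""
    return bin(a ^ b).count("1")

def looks_like_known_frame(frame: Image.Image, known: list[int],
                           threshold: int = 10) -> bool:
    """Compare a sampled video frame against fingerprints of known variants.
    The 10-bit threshold (out of 64) is an assumed tuning value."""
    return any(hamming(dhash(frame), k) <= threshold for k in known)
```

A heavy visual effect, like the video-game filter described above, shifts every frame’s fingerprint at once, which is one way a variant can land outside whatever distance threshold a platform has tuned.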
Facebook expanded its hashing technology to try to catch more variations of the video, adding audio hashing to the process. Networks that are part of the Global Internet Forum to Counter Terrorism, a group that includes Facebook, YouTube, Twitter, and Microsoft, added variations of the video to a shared database, allowing the other member networks to block the same uploads. Facebook says the group collectively added around 800 variations of the video to the database.
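That shared database can be pictured as a pooled set of fingerprints. The sketch below is hypothetical, since the forum hasn’t published its implementation, and the class and method names are invented for illustration; it just shows the basic mechanic of one member contributing hashes of a newly spotted variant so that every member can check uploads against the pool.

```python
# Hypothetical sketch of a pooled fingerprint database in the spirit of the
# shared-database arrangement described above; the real implementation is
# not public, and all names and values here are invented.
class SharedHashDatabase:
    def __init__(self) -> None:
        self._fingerprints: set[int] = set()

    def contribute(self, variant_hashes: list[int]) -> None:
        """A member platform shares fingerprints of newly found variants."""
        self._fingerprints.update(variant_hashes)

    def is_known_variant(self, upload_hash: int, threshold: int = 10) -> bool:
        """Check an upload against the pool. Exact membership is O(1);
        the near-match scan tolerates small edits at linear cost."""
        if upload_hash in self._fingerprints:
            return True
        return any(bin(upload_hash ^ f).count("1") <= threshold
                   for f in self._fingerprints)

# One network contributes a variant's fingerprint; another checks an upload
# whose fingerprint differs by a single bit (values are placeholders).
db = SharedHashDatabase()
db.contribute([0x9F3A0C5512EE74B1])
print(db.is_known_variant(0x9F3A0C5512EE74B3))  # True: within 10 bits
```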
“This was a tragedy that was almost designed for the purpose of going viral. We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that, especially in the case of more viral videos like this one, there’s more work to be done,” Mohan told The Washington Post.
Facebook didn’t comment on why the remaining 20 percent of videos weren’t caught at upload. “We continue to work around the clock to remove violating content using a combination of technology and people,” Facebook New Zealand representative Mia Garlick said in a tweet. “Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.”
Reddit and Twitter also removed related content from their platforms, but didn’t share statistics of their own. “We are continuously monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter rules,” Twitter Safety tweeted. “We are also in close coordination with New Zealand law enforcement to help in their investigation.”
Updated on March 19 with additional details from Facebook.