As the world grapples with the coronavirus outbreak, the overlords of the internet’s biggest communication channels have been busy waging a different war: one against misinformation. The COVID-19 epidemic, which has so far infected nearly 98,000 people across 86 countries, has rapidly sparked yet another “infodemic” for online platforms like Facebook and YouTube, inundating them with an around-the-clock avalanche of misleading ads, fake news, conspiracy theory posts, and a whole lot more.
(For the uninitiated, an infodemic is an overwhelming flood of information about a problem, much of it inaccurate or unreliable, that ends up hindering efforts to solve it.)
In the last few weeks, tech companies have introduced a range of initiatives to stem this spurt of misinformation. Google and Apple began actively cracking down on coronavirus-related apps that are not from official health organizations, for example, and blocked search results for similar queries on their app stores.
Facebook, Google, and Twitter also ramped up their fact-checking efforts to flag posts featuring hoaxes, conspiracy theories, and other misinformation. The three are likewise restricting exploitative advertisements that peddle, for instance, purported coronavirus cures and health products.
Searches on these social platforms now automatically surface information from verified sources at the top, along with prompts to seek medical assistance, whenever someone looks up coronavirus-related keywords. Facebook and Twitter are also offering the World Health Organization free ad credits to spread awareness.
“We know from previous emergencies — and from places where there have already been outbreaks of coronavirus across the world — that in times of crisis, people rely on communication tools even more than usual. That means that as well as helping people access information, we have a responsibility to make sure our services are stable and reliable to handle this load and we take that seriously too,” Facebook CEO Mark Zuckerberg wrote in a lengthy Facebook post.
Misinformation finds loopholes
While these efforts have proved an effective first line of defense against the coronavirus infodemic, misinformation and bad actors have still managed to seep in through systemic loopholes or simple oversights.
Google and Apple’s app store search protections don’t account for common misspellings, and unofficial listings surface if you change even a single letter. Searching for “coranovirus” on the iOS App Store, for instance, produces a mix of official health apps, an anonymous, unrated “Corona-Virus Chat” app, a defunct crowdfunding service, games, and more.
The same shortcomings plague the seemingly nerfed search results on social networks like Facebook. A query for “coronavirus” returns only official, verified links. But when I tried “coronovirus hoax” or searched for a fake news post that had gone viral in India, I could easily browse hoaxes and other misleading content.
Oddly, Facebook isn’t blocking coronavirus hashtags either. On Instagram, once you click past the initial search pop-up, you can browse every post, real or fake, unfiltered. In fact, Instagram influencers are exploiting the coronavirus hashtag to gain followers through fashionable mask-equipped selfies and portraits.
Deceptive ads
What’s more, some companies have been caught tweaking the language of their ads to sell face masks and hand sanitizers. “Stay healthy this flu season,” says an ad from one face-mask manufacturer whose earlier coronavirus promotions were pulled from Facebook. A beer company in India was similarly found running ads (without the coronavirus keyword, of course) to sell a “military-grade pollution mask.”
Google itself has been caught serving several ads for anti-coronavirus products across the web. YouTube, meanwhile, has quickly transformed into a hotbed of coronavirus conspiracy theories and rumors. At the time of writing, one of the more active channels, which regularly publishes deceptive coronavirus-related clips that are circulated across various Facebook groups, still hasn’t been taken down.
Despite its much smaller reach, Twitter has been unable to keep all coronavirus misinformation off its platform. Just a day ago, a tweet that wrongly claimed antibacterial sanitizers are ineffective against viruses amassed about a quarter of a million likes and over 80,000 retweets before it was deleted.
Time and again, tech companies have proved they have little control over the expansive digital empires they own, and the coronavirus infodemic is yet another example. Their efforts so far have largely struggled to debunk viral hoaxes and misinformation, and that is unlikely to change anytime soon. As recently as January, Facebook was running anti-vaccine ads in spite of banning them, which suggests the social network still hasn’t figured out a better approach to addressing these grave issues.
Perhaps these tech giants could take a page out of Pinterest’s book: that site has put its foot down and blocked all user posts related to coronavirus from its search results, an approach that has worked for it in the past in fighting vaccine misinformation.
More importantly, these slip-ups highlight how tech companies have refused to take more stringent measures during such catastrophic events, allowing misinformation to prevail one way or another. Over the last couple of years, Facebook, Google, and the rest have found themselves in similar positions and underestimated what was at stake. Given the role they play today in communities and economies across the globe, it’s time for them to step up and understand the responsibilities they carry on their shoulders.