
Social (Net)Work: What can A.I. catch — and where does it fail miserably?

Criticism over hate speech, extremism, fake news, and other content that violates community standards has the largest social media networks strengthening policies, adding staff, and reworking algorithms. In the Social (Net)Work Series, we explore social media moderation, looking at what works and what doesn’t, while examining possibilities for improvement.

From a video of a suicide victim on YouTube to ads targeting “Jew haters” on Facebook, social media platforms are plagued by inappropriate content that manages to slip through the cracks. In many cases, the platform’s response is to implement smarter algorithms to better identify inappropriate content. But what is artificial intelligence really capable of catching, how much should we trust it, and where does it fail miserably?

“A.I. can pick up offensive language and it can recognize images very well. The power of identifying the image is there,” says Winston Binch, the chief digital officer of Deutsch, a creative agency that uses A.I. in creating digital campaigns for brands from Target to Taco Bell. “The gray area becomes the intent.”

A.I. can read both text and images, but accuracy varies

Using natural language processing, A.I. can be trained to recognize text across multiple languages. A program designed to spot posts that violate community guidelines, for example, can be taught to detect racial slurs or terms associated with extremist propaganda.
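As a purely illustrative sketch, and not any platform’s actual system, the simplest version of this idea is a blocklist lookup: text goes in, a flag comes out. Production systems train statistical classifiers on labeled examples instead of using hard-coded lists, and the terms below are hypothetical placeholders.

```python
# Minimal sketch of keyword-based text flagging. Purely illustrative:
# real moderation systems use trained classifiers, not hard-coded lists.
BLOCKLIST = {"example_slur", "example_extremist_term"}  # hypothetical placeholders

def flag_post(text: str) -> bool:
    """Return True if the post contains a blocklisted term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

print(flag_post("This post repeats example_slur twice: example_slur"))  # True
print(flag_post("A perfectly ordinary post about the weather"))         # False
```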


A.I. can also be trained to recognize images, whether to detect some forms of nudity or to spot symbols like the swastika. It works well in many cases, but it isn’t foolproof. Google Photos, for example, was criticized for tagging images of dark-skinned people with the keyword “gorilla.” Years later, Google still hasn’t found a solution for the problem, instead choosing to remove the program’s ability to tag monkeys and gorillas entirely.

Algorithms also need to be updated as a word’s meaning evolves, or to understand how a word is used in context. For example, LGBT Twitter users recently noticed a lack of search results for #gay and #bisexual, among other terms, leading some to feel the service was censoring them. Twitter apologized for the error, blaming it on an outdated algorithm that was falsely identifying posts tagged with the terms as potentially offensive. Twitter said its algorithm was supposed to consider the term in the context of the post, but had failed to do so with those keywords.
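To make that distinction concrete, here is a toy-level sketch of the difference between flagging a term on its own and flagging it only alongside hostile context. The word lists are invented stand-ins for what a trained model would actually learn, not a description of Twitter’s algorithm.

```python
# Toy contrast between term-only flagging and context-aware flagging.
# The word lists are invented stand-ins for a trained model's judgment.
SENSITIVE_TERMS = {"#gay", "#bisexual"}
HOSTILE_CUES = {"hate", "ban", "disgusting"}  # hypothetical hostile-context cues

def flag_term_only(post: str) -> bool:
    words = set(post.lower().split())
    return bool(words & SENSITIVE_TERMS)  # over-blocks ordinary self-description

def flag_with_context(post: str) -> bool:
    words = set(post.lower().split())
    return bool(words & SENSITIVE_TERMS) and bool(words & HOSTILE_CUES)

print(flag_term_only("proud to use #gay in my bio"))     # True  (the over-blocking users noticed)
print(flag_with_context("proud to use #gay in my bio"))  # False (context shows the post is benign)
```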

A.I. is biased

The gorilla tagging fail brings up another important shortcoming — A.I. is biased. You might wonder how a computer could possibly be biased, but A.I. is trained by watching people complete tasks, or by inputting the results of those tasks. For example, programs to identify objects in a photograph are often trained by feeding the system thousands of images that were initially tagged by hand.


The human element is what makes it possible for A.I. to complete tasks previously impossible with typical software, but that same human element also inadvertently gives human bias to a computer. An A.I. program is only as good as its training data — if the system was largely fed images of white males, for example, the program will have difficulty identifying people with other skin tones.
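A hedged illustration of that point: even before training a model, simply counting the labels in a training set can reveal the kind of imbalance described above. The labels and counts here are invented for demonstration, not drawn from any real dataset.

```python
# Surfacing class imbalance in a labeled training set.
# Labels and counts are invented for illustration only.
from collections import Counter

training_labels = ["light_skin"] * 9000 + ["medium_skin"] * 600 + ["dark_skin"] * 400

counts = Counter(training_labels)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n} images ({n / total:.1%})")
# A classifier trained on this split sees far fewer examples of the
# under-represented groups and will tend to misidentify them.
```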

“One shortcoming of A.I., in general, when it comes to moderating anything from comments to user content, is that it’s inherently opinionated by design,” said PJ Ahlberg, the executive technical director of Stink Studios New York, an agency that uses A.I. for creating social media bots and moderating brand campaigns.

Once a training set is developed, that data is often shared among developers, which means the bias spreads to multiple programs. Ahlberg says this sharing leaves developers unable to modify the data sets inside programs built on multiple A.I. systems, which makes it difficult to remove a bias once it has been discovered.

A.I. cannot determine intent

A.I. can detect a swastika in a photograph — but the software cannot determine how it is being used. Facebook, for example, recently apologized after removing a post that contained a swastika but was accompanied by a text plea to stop the spread of hate.

This is an example of the failure of A.I. to recognize intent. Facebook even tagged a picture of the statue of Neptune as sexually explicit. Additionally, algorithms may unintentionally flag photojournalistic work because of hate symbols or violence that may appear in the images.

Historic images shared for educational purposes are another example — in 2016, Facebook caused a controversy after it removed the historic “napalm girl” photograph multiple times before pressure from users forced the company to change its hardline stance on nudity and reinstate the photo.

A.I. tends to serve as an initial screening, but human moderators are often still needed to determine whether content actually violates community standards. Despite improvements to A.I., that fact isn’t changing. Facebook, for example, is increasing the size of its review team to 20,000 this year, double last year’s count.

A.I. is helping humans work faster

A human brain may still be required, but A.I. has made the process more efficient. A.I. can help determine which posts require a human review, as well as help prioritize those posts. In 2017, Facebook shared that A.I. designed to spot suicidal tendencies had resulted in 100 calls to emergency responders in one month. At the time, Facebook said that the A.I. was also helping determine which posts see a human reviewer first.
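One way to picture that triage step, as a rough sketch rather than a description of Facebook’s actual pipeline: a model assigns each post a risk score, and the highest-scoring posts are routed to human reviewers first. The scores and threshold below are invented.

```python
# Rough sketch of risk-score triage; scores and threshold are invented.
import heapq

posts = [
    {"id": 101, "risk_score": 0.12},
    {"id": 102, "risk_score": 0.91},
    {"id": 103, "risk_score": 0.55},
]

REVIEW_THRESHOLD = 0.5  # hypothetical cutoff for human review

# Highest-risk posts reach a human moderator first.
queue = [(-p["risk_score"], p["id"]) for p in posts if p["risk_score"] >= REVIEW_THRESHOLD]
heapq.heapify(queue)
while queue:
    neg_score, post_id = heapq.heappop(queue)
    print(f"Send post {post_id} to human review (risk {-neg_score:.2f})")
```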


“[A.I. has] come a long way and it’s definitely making progress, but the reality is you still very much need a human element verifying that you are modifying the right words, the right content, and the right message,” said Chris Mele, the managing director at Stink Studios. “Where it feels A.I. is working best is facilitating human moderators and helping them work faster and on a larger scale. I don’t think A.I. is anywhere near being 100 percent automated on any platform.”

A.I. is fast, but the ethics are slow

Technology, in general, tends to grow at a rate faster than laws and ethics can keep up — and social media moderation is no exception. Binch suggests this gap could mean increased demand for employees with a background in the humanities or ethics, something most programmers don’t have.

As he put it, “We’re at a place now where the pace, the speed, is so fast, that we need to make sure the ethical component doesn’t drag too far behind.”

Hillary K. Grigonis