Instagram expands ban on content related to self-harm and suicide

Instagram said on Sunday, October 27, that it is banning fictional depictions of self-harm or suicide from its platform, including drawings, memes, and content from films or comics that use graphic imagery. It will also remove posts containing associated material, even if they don’t show self-harm or suicide imagery directly.

The move follows the company’s decision in February 2019 to prohibit the uploading of graphic photos of self-harm to its platform.

Instagram has come under increasing pressure to deal with such imagery following a high-profile case in the U.K. involving 14-year-old Molly Russell, who killed herself in 2017 after viewing graphic material on the site. Her father, Ian Russell, believes the platform is at least partly responsible for her death.

In a message posted on Sunday, Instagram boss Adam Mosseri described suicide and self-harm as “difficult and complex topics,” but added that “there are many opinions about how best to approach them.”

Mosseri continued: “The tragic reality is that some young people are influenced in a negative way by what they see online, and as a result they might hurt themselves. This is a real risk. But at the same time, there are many young people who are coming online to get support with the struggles they’re having — like those sharing healed scars or talking about their recovery from an eating disorder. Often these online support networks are the only way to find other people who have shared their experiences.”

He said that after consulting academics and mental health organizations, Instagram was seeking to strike “the difficult balance between allowing people to share their mental health experiences while also protecting others from being exposed to potentially harmful content.”

Mosseri acknowledged that there is still much work to be done in the area, but noted that in the three months following its initial policy change in February 2019, Instagram removed, reduced the visibility of, or added image-blurring “sensitivity screens” to more than 834,000 pieces of content, and located more than 77% of that content before it was reported by users.

Speaking to the BBC, Ian Russell, who has been campaigning for Instagram to be more robust in the way it handles sensitive content, described the platform’s latest move to ban fictional depictions of self-harm and suicide as a “sincere” effort, but said he wanted to be sure the company would deliver on its promises as it continues to tackle the issue.

Facebook, which owns Instagram and has taken similar action to deal with self-harm and suicide-related imagery, recently used World Mental Health Day to launch new Messenger filters and stickers aimed at encouraging conversations and support for those in need of help.
