Instagram’s Ban on Self-Harm Imagery Is Meaningless

Instagram has just announced that it will remove images of self-harm from its platform. The social media giant is under increasing pressure to find better ways to moderate users’ content, but this new announcement seems unlikely to address the major challenges facing both Instagram and Facebook.

The promise to remove graphic self-harm imagery came about as a result of pressure from the U.K. government following the suicide of a 14-year-old schoolgirl who had made various posts about depression prior to her death. The U.K. government’s Health Secretary, Matt Hancock, met with Instagram’s Head of Product, Adam Mosseri, who pledged to make the changes as soon as possible.

What’s worrying is that it took pressure from a country’s government to change something that Instagram should have addressed a long time ago. This is not an admirable move on Instagram’s part; the company should have been establishing policies for moderating extreme content long before now. Why didn’t Instagram’s Terms and Conditions already prohibit this sort of graphic content? Furthermore, it seems unlikely that deleting some posts is going to have an impact on users’ mental health; much larger changes would be required to deal with the myriad issues that are starting to emerge as a result of social media.

Instagram’s growth has been prolific, but clearly the company has been dragging its feet when it comes to managing the huge volume of content that its users are producing. Nipples (link NSFW) have been a source of problems for several years, and users have suggested that, even after reporting violent content, posts have remained online. It’s relatively easy to write code to enable machines to identify porn; gore is a different story. Furthermore, you’re much more likely to be able to recruit staff to pick out sexual content than you are to find people who are happy to sit for hours sifting through images and videos of graphic violence and death. Evidently, Instagram has not invested heavily enough to moderate the content which generates its ad revenue. There will always be dark corners of the internet where graphic and inappropriate content materializes; given Instagram’s huge role in shaping our society, it should be working incredibly hard to ensure that it is not a part of that dark corner.

Having spent a bit of time exploring the #selfharm hashtag on Instagram, I found that most users are sharing their experiences in search of help, or in order to help others. A small minority are expressing some very disturbing emotions and posting the resulting imagery. Given that the hashtag currently has well over half a million posts, banning self-harm images will probably be meaningless. Users with private accounts and small followings will continue untouched, and obscure hashtags will emerge, shifting in order to avoid detection and censorship.

So is Instagram promising to achieve something that simply isn’t possible? Unfortunately, until it makes wholesale changes to how it moderates content, yes. And when you consider that Instagram already actively promotes content that breaches its terms and conditions, there’s little reason to be optimistic that it regards this ban as much more than a reactionary attempt at public relations damage limitation.

As much as I am loath to admit it, Instagram needs government intervention. I’m an advocate of press freedom and the independence of information sharing online; however, Facebook and Instagram have consistently demonstrated that they have neither the inclination nor the ability to deal sufficiently with the problems and challenges that their platforms create. Unless there are serious consequences (fines and limits to advertising), they will drag their feet indefinitely.

Instagram is worth more than $100 billion. It’s time for it to invest properly when it comes to protecting its users.

Lead image is a composite using an image by Ian Espinosa.

16 Comments

Michael Holst

"The promise to remove graphic self-harm imagery came about as a result of pressure from the U.K. government following the suicide of a 14-year-old schoolgirl who had made various posts about depression prior to her death."

Hmm, this bit seems kind of strange to me. It sounds like this girl's posts were a "cry for help" and, if listened to, should have been the red flags signaling intervention by those who were close to her.

By removing self-harm content, wouldn't it remove the much-needed visibility those people need? Am I missing something?

Andy Day

This is a really good question. Not sure what the answer is.

Kang Lee

One could argue that if she could not post on Instagram, she would've used another medium that could've been a more obvious cry for help. Human psychology is complex.

Michael Holst

That's a possibility. I just think it seems strange to cut off an existing communication channel without knowing for sure if it could have been used for a form of prevention.

Our digital identities are pretty well tracked/documented by big data companies and are constantly being analyzed to serve us better and more relevant ads. I wonder if this would have been an opportunity to create suicide prevention "ads" that create a type of outreach based on the digital behavior of those who are at risk of self harm as a result of depression and anxiety. It's very typical to layer on social listening tools to ad serving.

Just thinking out loud (online)...

I promise this would be easy to do. There have been cases of 'big data' knowing a person is pregnant before they do: https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewan...
My aunt mentioned one thing in a message to me this morning about how I should take up writing again; about an hour ago, I had a sponsored ad come up on my fb feed for writing courses and another for 'how to make a career with your pen'.

Michael Holst

I work in digital advertising so I'd like to clear up some things.

The Target article doesn't say that they know a person is pregnant before the mother does. It says they know WHEN the mother does, because her habits change as she prepares for the birth. It's really just a deep dive into CRM data that businesses use to get to know you and how to better serve you. Creepy? Maybe, but it has been a growing practice for a long, long time.

Your aunt's message was most likely not a spur-of-the-moment idea. She was probably looking at things that had to do with writing because she wants you to start again. As a result, you could have been added to a LAL (look-alike) audience, which is just an audience that has similar interests, activities, locations, and habits on Facebook. You could share a lot of similar page likes and values on Facebook, which is what I would think caused you to be served the ad. If you click on it, you're going to get competitor ads, because your interaction with the ad indicates an opportunity. There are also so-called competitive conquesting audiences, which a business can use to target people who fit their demographic and are patrons of a competitor.

Facebook doesn't allow me to target audiences with ads based on private conversations, because that is protected information and doing so would be a huge breach of privacy laws.

Campbell Sinclair

The UK Gov is also considering banning social media if FB doesn't instigate change. So, one girl out of tens of millions of users is enough to consider banning social media? It's an overreaction to a very small number of users who need help, and banning social media will not prevent suicide and self-harm at all.

David Love

They are too busy fighting the real crime, side boob and side bum pics.

Ryan Burleson

Instagram should just remove Instagram, problem solved. The world may improve in some small way.

imagecolorado

I always thought that posting on Instagram was self harm.

greg tennyson

Why is that joint so thin?

Scott Wardwell

What the UK Health Minister was really concerned about is that this girl was putting her intentions out there and nobody within the ministry picked up on the signals and offered help. Is suicide prevention something their socialized medical system really prioritizes resources for? The loss of this girl is not the issue; it is that they have an intrinsic social problem and they don't want that fact known. Just another instance of the rationed, delayed care that the UK excels at.

Michael Holst

"Just another instance of rationed delayed care that the UK excels at."

Are you speaking from first hand experience?

"Is suicide prevention something their socialized medical system really prioritizes resources on?"

Probably not to the same degree as in the United States. The UK ranks 109th for suicide rate as of 2016 (W.H.O.). In comparison, the United States ranks 34th.

Suicide prevention isn't a simple thing to tackle, as there are many social and economic factors that have a part to play.

Aaron Bratkovics

That's fucked. Express yourself.

Sure. But on your own platform. Easy solution!

Usman Dawood

Banning such images could be very helpful. There's a great deal of data demonstrating that right after a suicide has been widely publicized, the suicide rate goes up. More people commit suicide when they see that someone else has. Psychologists are still trying to figure out exactly why, but many believe it's because suicide appears more socially acceptable when it's heavily publicized, even if the message is to not do it.

I don't personally have the answer but I think it's logical to assume that banning these types of images could prevent more people from doing it; especially on a platform like Instagram because of its huge reach and accessibility.