The moderation of social media platforms has come under intense scrutiny in recent months. A recent informal investigation suggests that while some content on Instagram is removed almost as soon as it is reported, other violations are left online indefinitely, raising difficult questions about how the company is dealing with a growing problem.
Instagram’s parent company, Facebook, has not had a good year. A few weeks ago, the British parliament seized internal documents after Mark Zuckerberg refused to appear before MPs to answer questions. That came on top of a series of scandals: Cambridge Analytica, alleged attempts to undermine George Soros, and the exposure of users’ private photographs to app developers, to name but a few.
Writing earlier this year, Mason Gentry investigated how effectively Instagram responded to reports of various types of content violation and came away with some worrying results. While he acknowledges the unscientific nature of his research, it raises concerns about how the platform deals with posts that breach its terms and conditions. Pornography, it seems, is dealt with almost immediately; by contrast, violence, gore, and animal cruelty can stay online indefinitely, though sometimes hidden behind a warning.
In June this year, Instagram passed one billion users. Moderating Facebook’s two billion users is proving a huge challenge, and Instagram seems no different, though it has yet to be accused of contributing to a genocide. For some types of content, it can be understandably tricky to know where to draw the line. For example, the #gore hashtag (link deliberately not included) contains plenty of incredible work with prosthetics and fake blood, some of it so realistic that it’s hard to tell what is real and what has been staged. Elsewhere, however, the line is pretty obvious; it took me only a few clicks to find myself watching content so violent that I don’t want to describe it. Lots of it.
As a photographer absorbed in curating my own profile and admiring the work of some amazing artists, I don’t always see how much of Instagram is filled with truly terrible things. I’ve written before about how Instagram is a cesspit of populist content driven by clicks rather than quality. I’ve also complained at length about Instagram’s clear reluctance to combat freebooting on its platform: it seems happy to see content stolen as long as users stay in the app, consuming its adverts. What I had failed to realize was how much of Instagram is violent, graphic, and seemingly free of moderation. Around the world, thousands of 13-year-olds will receive new electronic devices this Christmas, many of them no doubt opening new Instagram accounts. Terrifyingly, those children, even with all the parental controls in place, could in just a few clicks be watching footage of animals being abused or, as I just discovered, people being executed. In Gentry’s experience, reporting this content seems to make little difference.
Why does pornography get removed so much more quickly? My guess is that finding human moderators willing to screen sexual content is much easier than finding those prepared to sit through violence. Given that Facebook has established a reputation for failing to give its own moderators the support they need to deal with mental trauma, this would not be surprising.
As I noted in my rant against freebooting, you have to question whether there’s any incentive for Instagram to address this problem. Clicks are clicks, and clicks are ad revenue. We need to increase the pressure on companies worth such vast amounts of money to be more accountable and to invest real resources in moderation rather than the token efforts made so far. If you have billions of users generating billions of dollars in profit, moderating their content comes with the territory.