Media Matters for America

[Image: Instagram logo over a bright orange background. Andrea Austria / Media Matters]

Instagram is letting accounts promoting hate speech go unchecked

Written by Camden Carter

Published 02/18/22 2:39 PM EST


Content warning: This article contains examples and descriptions of hate speech.

Media Matters has identified several Instagram accounts that are dedicated to generating hate speech and are accumulating significant followings. We reported five of these accounts to Instagram via the app’s reporting channel and were informed by the platform that four of them did not violate its community guidelines. As of publishing, the fifth account is still under review. The continued presence of these accounts shows that Instagram’s reporting channels are inadequate and that it is easy for users to circumvent its current content moderation policies. Meta, Instagram’s parent company, has once again fallen short of its promises to improve its detection and removal of extremist content.

Instagram has long been aware of the prevalence of hate speech on the platform and has introduced new features to try to address the issue. While Meta has publicly claimed it's working to expand its understanding of and policies around hate speech on its platforms, it has been slow to evolve. It was only in 2019 that the company published a blog post explaining its new understanding of the overlap between white nationalism and white supremacy, and it began including these topics in its policies.

Meta’s current policy on hate speech prohibits users from posting hateful content, defining hate speech as “a direct attack against people — rather than concepts or institutions — on the basis of what we call protected characteristics.” However, Media Matters has identified several accounts that violate these policies but remain active on Instagram. At least one of these accounts has accumulated upward of 55,000 followers, and some have been on the platform since 2020.

The language in Meta’s reporting policies for Instagram is predominantly centered on individual pieces of content and ignores that a narrative of hate can be created not just through an account's individual posts, but also through its overall ethos: the content it shares, comments from other users, and its bio, handle, and name. Very few of these accounts post individual content (captions, videos, or images) containing hate speech that explicitly violates Instagram’s policies. Rather, they post or repost content that develops a narrative of hate and encourage followers to interpret it that way. For example, one account with over 17,000 followers claims it is “dedicated to [the] showcasing and appreciation of Jewish accomplishments and prominent Jewish figures.” At first glance, the account seems to have a genuine and positive intent. But a closer look reveals that this account is satirical and deeply harmful. Each post showcases a “prominent Jewish figure,” and the account uses these posts to weave the antisemitic “puppet master” conspiracy theory.

[Screenshot: Comments discussing how an account that claims to be celebrating Jewish people is satirical]

With many of these accounts, the comment sections are typically where the most explicitly atrocious language can be found. Some accounts post screengrabs of other content, such as an article headline, a TikTok video, or a Tinder profile, with no or minimal caption. Followers then sound off in the comments, mocking or spewing hate at the subject of the post. For example, a post from one of these accounts that's simply a screenshot of an article about a prominent trans celebrity garnered hundreds of transphobic comments, including deeply personal attacks on the celebrity’s identity, body, and mental wellness. In some cases, the volume of comments is so high that it is virtually impossible to report all of them.

To avoid comment moderation, users often use code words or phrases, intentional misspellings, or emojis. While this is not a new phenomenon, Instagram is clearly still struggling to manage it and users continue to find ways to manipulate the app to allow them to use explicit hate speech. We found several examples on these accounts of users replying single letters to one another's comments, working together to spell out a slur. While a single letter would not seem harmful as an individual comment, when viewed as a whole the message is clearly hateful. 

[Screenshot: A comment thread of users spelling out a racial slur]
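The evasion described above can be sketched as a toy moderation check. Everything here is hypothetical (the placeholder blocklist term "badword", the helper names); it is not Instagram's actual moderation code. The point is only that a filter checking each comment in isolation passes every single-letter reply, while a thread-level pass that joins consecutive single-character comments catches the reconstructed word:

```python
# Hypothetical sketch: per-comment vs. thread-level blocklist checks.
BLOCKLIST = {"badword"}  # placeholder standing in for a real slur list


def comment_violates(text: str) -> bool:
    """Per-comment check: the model a one-report-per-comment flow assumes."""
    return any(term in text.lower() for term in BLOCKLIST)


def thread_violates(comments: list[str]) -> bool:
    """Thread-level check: join runs of consecutive single-character
    replies and re-run the blocklist against the reconstructed string."""
    joined = ""
    for c in comments:
        stripped = c.strip()
        if len(stripped) == 1:
            joined += stripped.lower()
        else:
            if comment_violates(joined):
                return True
            joined = ""  # a longer comment breaks the run
    return comment_violates(joined)


# A thread of single-letter replies spelling out the blocked word:
thread = ["b", "a", "d", "w", "o", "r", "d"]
print(any(comment_violates(c) for c in thread))  # False: each letter passes
print(thread_violates(thread))                   # True: the joined run fails
```

Real moderation systems face far messier input (emojis, misspellings, interleaved replies), but the asymmetry is the same: per-comment review sees nothing actionable, while the thread as a whole is clearly violative.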

Comments violating Instagram’s policies are just one aspect of the problem. The accounts Media Matters identified craft their narrative in such a way that they don’t need to rely on user comments. For example, one account, which has turned off its comments section, posts exclusively about local violent crime cases. Based on the account’s individual posts and captions, it does not appear to violate Instagram’s policies. However, the account, which has accumulated over 11,000 followers, is pushing a clearly racist narrative unchecked by Instagram, as each post is centered exclusively around cases that allege Black people have committed violent crimes against white people. 

After Media Matters discovered these accounts, we followed the appropriate prompts to report five of them to Instagram under the “hate speech” category. The platform responded with notifications saying it won’t take any action against four of these. As of publishing, one account is still in the “review” stage, pending a decision. One of the reported accounts has since disappeared, after remaining on the platform for at least eight months and accumulating over 23,500 followers.

[Screenshots: The four steps to report an account for hate speech on Instagram]

The ways these accounts operate make it difficult to report them effectively, but Instagram’s reliance on user reporting is flawed to begin with. The platform has acknowledged that while it uses artificial intelligence to detect violative content, it also partially relies on users reporting this content when they come across it. User reporting may help to address issues around harassment and bullying, but it is significantly less effective when it comes to accounts that exist to foster a bigoted bubble of like-minded communities, such as the ones highlighted in this piece. While Instagram gives users features to hide, block, and report violative content, its algorithm also recommends content that it believes each user will like. This means the users most likely to report content containing offensive language or hate speech are the least likely to actually encounter it on the platform. Conversely, the users who like such content, and are thus least likely to report it, are the most likely to encounter it.

This is not the first time Instagram has been caught failing to moderate this type of content. After an incident last summer, when Black British footballers were harassed through the racist use of emojis and other hate speech in comments, Instagram was forced to acknowledge its role in the problem. When the BBC asked platform head Adam Mosseri about it in July 2021, he stated, “The issue has since been addressed.” 

