Update (7/13/23): In a statement to Media Matters, Meta did not dispute that the company has not extended its fact-checking program to Threads. A Meta spokesperson said: “Our industry leading integrity enforcement tools and human review are wired into Threads. Like all of our apps, hate speech policies apply. Additionally, we match misinformation ratings from independent fact checkers to content across our other apps, including Threads. We are considering additional ways to address misinformation in future updates.”
Meta launched its new text platform Threads, announcing that users on the new platform would be required to follow Instagram’s content moderation policies. But Media Matters has identified misinformation and hate speech proliferating on Threads as the company carves out exceptions to those policies: Meta launched Threads without the fact-checking program that aims to prevent the spread of misinformation on Facebook and Instagram, and it has seemingly abandoned the hate speech policies that govern its other platforms.
On July 5, Meta launched Threads, a companion app to Instagram that seems to be capturing users dissatisfied with the chaos that has plagued Elon Musk’s Twitter.
Head of Instagram Adam Mosseri has claimed that the goal of Threads “isn’t to replace Twitter,” but to “create a public square” for communities “that are interested in a less angry place for conversations.” Meta CEO Mark Zuckerberg echoed that sentiment.
But Threads’ link to Instagram helped the platform grow fast: 100 million users signed up in less than a week, including Nazi supporters, anti-gay extremists, and white supremacists. Media Matters previously found that these right-wing users immediately tested the platform’s content moderation limits — which Meta claimed were consistent with Instagram’s community guidelines — and posted harmful rhetoric and misinformation. Meta has allowed much of this content to remain, seemingly carving out exceptions to Instagram’s policies and even backtracking when right-wing misinformers complained that users were getting warning labels before following them, with the company claiming that the labeling “was an error and shouldn’t have happened.”
Meta launched Threads without the company’s fact-checking program, allowing misinformation to proliferate
With Threads, Meta has made clear that it has no intention of managing misinformation on the platform. A Meta spokesperson acknowledged at least one of these gaps, saying the company “will not extend its existing fact-checking program to Threads” — even though Meta runs a program in which it works “with third-party fact-checkers … to help identify, review and label false information” on Facebook and Instagram. (The spokesperson also suggested that crossposts rated false by fact-checkers on Facebook or Instagram would carry their labels to Threads.)
Media Matters has identified various forms of misinformation on Threads since it launched less than a week ago, including regarding the 2020 election, COVID-19 and vaccines, and gender-affirming care.
Meta has also seemingly abandoned hate speech policies that govern Instagram, letting right-wing accounts post hate speech on Threads
While Instagram has a fairly robust hate speech policy that, among other things, prohibits the anti-LGBTQ “groomer” slur, it has struggled to enforce that policy, and Meta has seemingly dropped any pretense of enforcing it on Threads.
Media Matters identified dozens of examples of hate speech on the new platform, including an instance in which Meta allegedly restored a post from anti-LGBTQ account Libs of TikTok that was removed as “hate speech.”
Far-right figures have posted extreme anti-LGBTQ hate speech to Threads, including posts calling trans people “demonic influenced souls,” calling LGBTQ identity a “social contagion,” and falsely accusing LGBTQ people of supporting “child grooming.”
Users have also posted racist, antisemitic, and anti-immigrant rhetoric on Threads, including sharing video of white supremacist Nick Fuentes saying the N-word, complaining that the platform returns no search results for the N-word, claiming that “migrants make neighborhoods more dangerous,” and asking “Where is @hitler.”
Meta has repeatedly prioritized revenue and growth over the safety of its users. The company seemingly launched this new platform to capture dissatisfied Twitter users, but it has once again failed to anticipate how bad actors would exploit it, launching without adequate policies or enforcement mechanisms to address the pernicious threats of misinformation and hate speech.