Research/Study
Hate speech and misinformation proliferate on Meta products, with over 13,500 policy violations documented in the past year alone
As Meta pushes new products to shareholders next week, the company is losing control of the products it already has
Meta, the parent company of Facebook and Instagram, has repeatedly failed to keep users safe, even as its platforms have contributed to real-world violence and various other harms. Meta frequently boasts about its content moderation policies, yet the company has prioritized new features like the metaverse, short-form video, and artificial intelligence-driven video recommendations over fixing its existing products by developing adequate policies and enforcing them consistently.
On May 25, Meta will hold its annual meeting of shareholders against a backdrop of a less active user base, its slowest revenue growth since its initial public offering, and reduced earnings — and, as our latest study suggests, increased risk for shareholders, as its platforms continue to be plagued by issues that Meta refuses to adequately address, including misinformation, hate speech, and dangerous users.
Over the last year, Media Matters has regularly reported on Meta’s failure to enforce its community standards on Facebook and Instagram (by not identifying, labeling, or removing violations), and we have reported on its very narrow interpretations of these standards, which in some cases were not adequate to begin with.
We compiled all of our reporting from the last year (May 1, 2021, through April 30, 2022) and found over 13,500 violations of Meta’s policies on Facebook and Instagram, with much of the violative content still active or not appropriately labeled.
Key findings of our study include:
- In addition to former President Donald Trump’s posts that are still on Facebook despite his two-year suspension from the platform, there are nearly 10,000 posts and ads that allow Trump to evade his ban. They include ads from Trump’s joint fundraising committee, posts promoting Trump’s official statements, and posts containing livestreams of Trump’s misinformation-filled post-presidency rallies.
- Even though Meta has fairly robust policies against COVID-19 and vaccine misinformation, Media Matters has identified nearly 1,500 violations of these policies as well.
- Meta’s hate speech policy insufficiently protects transgender and nonbinary users, as well as users who speak languages other than English. Media Matters identified nearly 1,000 violations of Meta’s hate speech policy, including Instagram accounts promoting white supremacy and Facebook posts pushing anti-LGBTQ smears.
- Meta’s labels on authoritative election information have failed to reduce election misinformation on the platform, and Media Matters has identified over 700 violations of Meta’s election policies, including entire Facebook groups dedicated to election misinformation, unlabeled posts with election misinformation, and posts promoting “Stop the Steal” and related rallies.
- Along with inadequately addressing health and election-related misinformation, Meta has failed to properly label a wide array of other types of misinformation, including misleading or false content about Russia’s invasion of Ukraine and Judge Ketanji Brown Jackson. It has also allowed and profited from numerous ads containing this misinformation. Over the last year, Media Matters has identified nearly 500 violations of this policy.
- Over the last year, Media Matters has identified roughly 30 violations of Meta’s dangerous individuals and organizations policy, including ban evasions on Facebook and Instagram and content promoting the QAnon conspiracy theory, which also falls under this policy.