Béjar spoke before Congress last week, where he was introduced as an engineer “who was hired specifically to help prevent harms against children.” He detailed his firsthand experience of how company “executives, including Zuckerberg, knew about the harms Instagram was causing but chose not to make meaningful changes to address them.”
In an interview with The Associated Press, Béjar said, “I can safely say that Meta’s executives knew the harm that teenagers were experiencing, that there were things that they could do that are very doable and that they chose not to do them.”
Béjar’s ignored warnings are part of a larger pattern of Meta failing to address harmful content on its platforms, particularly content affecting children and teens. The company currently faces a lawsuit from multiple states for allegedly “knowingly using features on Instagram and Facebook to hook children to its platforms.” The state of Massachusetts has also filed a separate lawsuit against Meta over Zuckerberg’s alleged dismissal of concerns about Instagram’s impact on users’ mental health.
Media Matters has previously reported on Meta’s moderation failures on Instagram, including the platform recommending gimmicky weight loss posts, allowing the spread of COVID-19 misinformation, hosting bigoted anti-LGBTQ accounts that have baselessly referred to LGBTQ people as “groomers,” and failing to stop the proliferation of hate speech. The “narrow rules and unreliable automated enforcement systems” described by The Wall Street Journal fail to take into account wider context, in-group subtext, or ways that the platform can be manipulated to spread offensive rhetoric or dangerous misinformation.
For example, Media Matters reported in 2022 that users evade moderation on Meta’s platforms by commenting on posts with code words or phrases, intentional misspellings, or emojis. While this tactic is not new among those trying to avoid moderation, Instagram is clearly still struggling to prevent users from manipulating the platform to spread explicit hate speech and other harmful content — despite internal warnings from experts like Béjar.