Facebook is letting COVID-19 vaccine misinformation flourish in its comment sections
Users are exploiting the platform’s lax moderation by posting anti-vaccine misinformation there, often encouraged by group administrators
Written by Rhea Bhatnagar & Clara Martiny
Research contributions from Kayla Gogarty
Facebook’s comment sections are poorly moderated, allowing users in public and private groups to spread dangerous misinformation during the pandemic. Media Matters has identified comments pushing scare tactics and conspiracy theories to encourage vaccine hesitancy, often in violation of Facebook’s stated policies.
On September 17, The Wall Street Journal published an investigation detailing Facebook’s willful disregard of its own research showing that the platform was fomenting vaccine hesitancy. The article found that in March, roughly 41% of comments on English-language vaccine-related posts “risked discouraging vaccinations,” and that top officials at global health institutions were concerned about the negative effects of such unchecked anti-vaccine comments.
In response, Facebook published a blog post titled “What the Wall Street Journal Got Wrong,” claiming that health organizations continue posting on Facebook because the platform’s measurements show that their posts “effectively promote vaccines” and citing a supposed 50% decline in vaccine hesitancy among users. However, Facebook does not give researchers access to the internal data needed to verify those claims.
But researchers can access what is visible in the comments section — and what we found is not good.
Media Matters found that Facebook users are exploiting the platform’s lax approach to moderating comments by posting links and screenshots of anti-vaccine misinformation in the comment sections of posts in both public and private groups, often encouraged by group administrators and moderators. This is happening across various Facebook group networks, including groups opposing vaccines or masks.
In the past, Facebook has said that it relies on group administrators to “slow down” toxic conversations and has even urged them to “nurture their groups.” Yet Media Matters found that in multiple anti-mask and anti-vaccine groups, the administrators Facebook trusts to police their communities are instead giving members instructions on how to evade “Zuck’s radar” and avoid being shut down.
In these groups, administrators and moderators specifically encourage members to use the comment sections to evade moderation, advising them to keep a post’s title “as soft and neutral as possible” and to put the links, memes, and other content in the post’s first comment.
Administrators and moderators also coach members on other posting methods meant to avoid removal, such as abbreviating “forbidden words (V word & C word),” referring to “vaccine” and “COVID.”
Unfortunately, due to Facebook’s lax moderation of content on its platform, these evasion techniques are working, and misinformation is thriving on the site. Users can spread dangerous medical misinformation through the comments section with captions as simple as “in the comments.”
For example, in the public group “Anti-Vacks Lives Matter , #Operation1009,” several ban-evasion techniques are being used to spread distrust in the COVID-19 vaccine. A TikTok video linked in the comments, which has now accumulated nearly 4 million views, shows unvaccinated nurses being terminated for refusing to follow a hospital’s vaccine policy. After users commented in support of the fired workers, an administrator urged one user to change their emoji reaction to avoid being shadow-banned by the platform.
These moderation-evasion strategies are not confined to anti-COVID vaccine groups, nor are they new. For instance, several right-leaning groups attacking critical race theory have taken advantage of Facebook’s careless comment moderation to push misleading and dangerous narratives on that topic.
And the tactic has been used for years, giving Facebook ample time and countless opportunities to improve its moderation. But given the platform’s weak approach to combating misinformation and its consistently empty promises to do better, it’s not surprising that this tactic has become so widespread.