A year ago, Facebook contributed to a military coup in Myanmar. Its response has been an appalling failure.
Written by Rhea Bhatnagar
After Myanmar suffered a military coup on February 1, 2021, analysis showed that Facebook had played a large part in fomenting the conflict. Though it promised to improve its practices, more than a year later the world’s largest social media platform continues to do the bare minimum to save face, pointing to unmet promises and unenforced new policies as progress while failing to invest in meaningful solutions that would support its users in Myanmar.
Last year, Media Matters compiled an in-depth timeline examining Facebook’s role in Myanmar and major instances in which the platform has failed its 28.7 million Burmese users -- more than half of the nation’s population. Since then, the situation in Myanmar has only worsened. The country is witnessing “violent reprisals” by government forces, “including the torture and killing of 40 civilians in July 2021.” The BBC reports:
The Assistance Association for Political Prisoners (AAPP), which keeps a toll of those killed, jailed or detained by the military, says 1,503 people have been killed since the military regime came to power. The US-based organisation Acled, which compiles figures from news reports and publications by human rights organisations, says about 12,000 may have died.
The link between online Facebook posts and offline violence had been well-documented for years prior to the coup. In 2017, unmoderated hate speech and threats against the Muslim Rohingya minority exploded on the platform. An independent investigation commissioned by Facebook in 2018 also found that the company had not done enough to stop its platform from being used to foment real-world harms. In fact, Facebook’s parent company Meta is currently facing a $150 billion lawsuit “over allegations that the social media company did not take action against anti-Rohingya hate speech that contributed to violence.”
Facebook promised to do better. It has not.
Even though activists have spent the past year warning the platform that its actions aren’t enough, questionable pro-military content is still served up to its users, and the platform has not announced legitimate investments -- such as significantly increased resources and staffing -- to support its Burmese users.
Responding to the coup, Facebook announced “symbolic actions” but failed to rein in hate speech and violence
When the military seized the government in February 2021, Facebook was quick to release new policies “to support our community in Myanmar during this time,” stating:
Key among these is the decision to significantly reduce the distribution of all content on Facebook Pages and profiles run by the Myanmar Military (“Tatmadaw”) that have continued to spread misinformation. In line with our global policies on repeat offenders of misinformation, we will also no longer be recommending them to people.
The company also placed a ban on “the remaining Myanmar military” and “military-controlled state and media entities,” preventing them from posting on Facebook or Instagram, and it instituted new policies against praising violence.
Though Facebook did ban some entities from its platform, it was unable to fully uphold its own standards. Human rights group Global Witness found that Facebook’s algorithm was actively promoting posts that incited violence against anti-military protesters -- a clear violation of policy.
Researchers found that after a user liked a Tatmadaw fan page, which was not seen as a violation of community guidelines, Facebook began suggesting pro-military pages that did contain abusive content, including posts that infringed on both the company’s Myanmar-specific policies and its overall community guidelines. Global Witness also noted that despite updates to Facebook's policies -- such as an April update calling for the removal of “praise, support or advocacy for the arrests of civilians by the military and security forces in Myanmar” -- Facebook did not retroactively remove all related content.
Though the Global Witness study was released in June 2021, many of the advocacy group’s findings unfortunately remain relevant. In November, The Associated Press reported on Facebook’s internal documents, which found that hate speech and misinformation were still spreading on the platform -- with examples readily available to the public.
Ronan Lee, a visiting scholar at Queen Mary University of London’s International State Crime Initiative, told the AP, “Facebook took symbolic actions I think were designed to mollify policymakers that something was being done and didn’t need to look much deeper.”
This is Facebook’s typical approach to external criticism, and Myanmar is no exception.
Facebook designated Myanmar as a “Tier 1” risk for content moderation but hasn't devoted meaningful resources to solving the problem
Researchers have long called for Facebook to dedicate more resources to Myanmar -- specifically Burmese-language speakers who can moderate hate speech.
Around 2019, “Facebook crafted a list of ‘at-risk countries’ with ranked tiers for a ‘critical countries team’ to focus its energy on, and also rated languages needing more content moderation,” according to the AP. Myanmar was listed as a “Tier 1” at-risk country and Burmese was regarded as a “priority language.” A high rank like this meant the country was allocated resources -- such as “dashboards to analyze network activity” -- to ensure the safety of its users. Facebook has neither clarified what these resources are nor explained its reasoning for the rankings.
In November, Facebook told the AP that it had “built a dedicated team of over 100 Burmese speakers,” but it has not said publicly how many work as content moderators. Instead, according to the AP, Facebook’s internal documents “show Myanmar became a testing ground for new content moderation technology, with the social media giant trialing ways to automate the detection of hate speech and misinformation with varying levels of success.” But Facebook’s automated moderation technology has already failed to protect the people of Myanmar, with incorrect translations allowing threats and hate speech against the country’s Muslim minority to evade detection and thrive on the platform.
Early last year, Facebook released another policy enabling new safety features within the country that allow users to lock their profiles in response to a surge in doxxing on the platform. The policy requires users to activate these features themselves, rather than the platform doing it for them, and some of those affected -- particularly users with less technological familiarity -- may have difficulty doing so. It’s also important to note that while this feature does give some power to users, it conveniently removes direct liability from Facebook. Ultimately, it does nothing to rein in hate speech on the platform.
Facebook has made several public statements about the company’s supposed commitment to bettering its standards and safety practices in Myanmar, yet it has once again failed to institute any real change. Until the company chooses to focus its resources on actual moderation -- such as investing in robust content moderation teams in every language present on the platform -- Facebook will continue to be a platform for human rights violations around the world.