New evidence shows that the social media giant failed to uphold its own rules on incitement to violence and spread of false information in relation to the military coup
Research published today shows that, despite Facebook declaring the situation in Myanmar an emergency in which it was doing everything it could to keep people safe, users were still being prompted to view and “like” pro-military pages containing content that incited and threatened violence, pushed misinformation that could lead to physical harm, praised the military and glorified its abuses.
The findings suggest that making Facebook a safe space, as the company claims to aspire to, involves far more than finding and taking down content that breaches its terms of service. Rather, it requires a fundamental change to the algorithms that recommend dangerous and hateful messages, and it underlines the need for governments to legislate to hold Big Tech accountable for amplifying inflammatory content.
Our investigation was based on activity across a clean Facebook account, which began by searching for any pages related to ‘Tatmadaw’. When we viewed and ‘liked’ the very first page presented, described as “a gathering of military lovers”, Facebook’s algorithm recommended a series of ‘related pages’. Across the first five recommended pages, which together were followed by almost 90,000 Facebook users, we identified a range of highly problematic content.
This research was carried out in the midst of a situation Facebook had itself identified as an emergency, with the company declaring its crisis centre was “running around the clock” to ensure people were being kept safe. Given the social media platform’s history of being used to incite violence in the country, this is an issue on which the company would rightly have expected scrutiny. And yet it failed to uphold basic standards, and problematic content was recommended following a simple, cursory search. This raises serious questions over how the platform operates in places with similar levels of violence, instability and human rights abuse, but less internal and external interest.
Our analysis includes examples of the following types of content, which infringe Facebook’s own rules:
● Incitement to violence: a ‘Wanted’ poster offering a $10 million bounty for the capture “dead or alive” of a young woman. This post included pictures of the woman’s face and a screenshot of what appeared to be her Facebook profile, along with a caption that included the line “her account has been deactivated. But she cannot run.” Given that this was posted the day after dozens of people were killed in the Yangon neighbourhood named in the post, we consider this to be a credible threat to life.
● Glorifying the suffering or humiliation of others: A video documenting a military airstrike, with laughter audible in the background and a caption displayed, part of which reads: “Now you are getting what you deserve.”
● Misinformation that can lead to physical harm: In response to Myanmar’s military shooting indiscriminately into residential neighbourhoods, civil defence groups were formed, raising black flags in what was described as a warning sign of defiance. A post on Facebook places images of black flags in Yangon alongside a picture of a convoy of Islamic State fighters, claiming that ISIS “has arrived” in Myanmar. The post suggests that ISIS and ARSA, the Arakan Rohingya Salvation Army, a small Rohingya insurgent group that no serious commentator has ever suggested is connected to ISIS, have “infiltrated the nation”. The Rohingya are a persecuted Muslim minority, so the post plays on anti-Muslim, and specifically anti-Rohingya, sentiment that the military has fuelled through its propaganda for decades. Indeed, Facebook has acknowledged the role it played in inciting violence in the genocidal campaign against the Rohingya Muslim minority.
● Misinformation claiming widespread fraud in the recent elections: A statement echoing the claims of “voter fraud” made by the military to justify the coup. This violates the policy that Facebook enacted in Myanmar shortly after the coup, under which it commits to removing misinformation claiming that there was widespread fraud in Myanmar’s November election.
● Content that supports violence against civilians: In one of several such examples, a post from 1 March on one of the pages recommended by Facebook contains a death threat against protestors who vandalise surveillance cameras.
“Those who threaten female police officers from the traffic control office and violently destroy the glass and destroy CCTV, those who cut the cables, those who vandalise with colour sprays, [we] have been given an order to shoot to kill them on the spot,” reads part of the post in translation. “Saying this before Tatmadaw starts doing this. If you don’t believe and continue to do this, go ahead. If you are not afraid to die, keep going”.
Naomi Hirst, Head of the Digital Threats Campaign at Global Witness, said:
“The situation in Myanmar continues to be both violent and unpredictable, but we know that hundreds of innocent people, including children, have been killed by the military. Facebook itself has identified the coup as an emergency and declared that its ‘crisis centre’ was running around the clock to keep users safe and yet, within a few clicks, we found posts spreading misinformation and glorifying violent and deadly acts. There can be no clearer case for governments to regulate social media platforms and their algorithms than this.”
The content outlined above was posted between the time of the coup and the annual Armed Forces Day celebration on 27 March, and remained online two months later. Armed Forces Day was the bloodiest day since the coup, in which the military killed at least 100 people in 24 hours, including teenagers, with a source telling Reuters that soldiers were killing people “like birds or chickens”. A 13-year-old girl was shot dead inside her home. A 40-year-old father was burned alive on a heap of tyres. The bodies of the dead and injured were dragged away, while others were beaten on the streets.
On 11 February, just over a week after the coup started, Facebook introduced a new ban on the Myanmar military. This has since been updated and tightened to include the removal of “praise, support and advocacy of violence by Myanmar Security Forces and Protestors”. However, this research shows that, yet again, Facebook has failed to deliver on the commitments it makes to keep its users safe and to ensure its platform is a “safe place” that does not contain material that has the “potential to intimidate, exclude or silence others”.
Facebook is a vital forum for debate, conversation and online socialising everywhere, but in Myanmar that is particularly true. Almost half the country’s population is estimated to use Facebook, and for many people the platform is synonymous with the internet. Mobile phones come pre-loaded with Facebook and many businesses do not have a website, only a Facebook page. This research suggests that, despite protestations and public statements to the contrary, Facebook is not only failing to remove content that breaches its own rules, but its algorithms are actively promoting such content.
Naomi Hirst at Global Witness continued:
“Facebook is one of the wealthiest and most powerful companies in the world and yet, again and again, they fail to uphold the very basics of their community guidelines. The reality is that they have little incentive to when their business model rewards and monetises the most shocking content, promoted by algorithms they refuse to share any details about. In Myanmar, where citizens are being killed on the streets by the military and the democratically elected Government has been forcibly removed, promoting the content we found to users is likely to cause real world harm. The platform operates too much like a walled garden, its algorithms are designed, trained, and tweaked without adequate oversight or regulation. This secrecy has to end, Facebook must be made accountable.”
In response to our findings, we are calling for:
- Governments to follow the lead of the European Union and legislate to regulate Big Tech companies, including their use of secret algorithms that can spread disinformation and foment violence.
- Facebook to ensure that content recommendation algorithms do not amplify content that violates its community standards, to investigate failures, and to make the findings of these investigations public.
- Facebook to update its community standards to ban abstract threats of violence against civilians in Myanmar and prevent its platform from being used to recruit soldiers in Myanmar.
We wrote to Facebook to give them the opportunity to comment on these findings; they did not respond.
Postscript: While Facebook didn't respond to our letter giving them the opportunity to comment on this story, they did respond to The Guardian and the Associated Press.
In response to The Guardian, a Facebook spokesperson said: “Our teams continue to closely monitor the situation in Myanmar in real-time and take action on any posts, Pages or Groups that break our rules. We proactively detect 99 percent of the hate speech removed from Facebook in Myanmar, and our ban of the Tatmadaw and repeated disruption of coordinated inauthentic behavior has made it harder for people to misuse our services to spread harm. This is a highly adversarial issue and we continue to take action on content that violates our policies to help keep people safe.”