In early June, Gambia initiated discovery proceedings before the U.S. District Court for the District of Columbia to compel Facebook to provide data on “suspended or terminated” Facebook accounts of Myanmar military institutions and personnel. The action is intended to gather information that could support Gambia’s case against Myanmar before the International Court of Justice (ICJ) for Myanmar’s alleged violation of the Genocide Convention concerning the Rohingya people. Beyond its potential evidentiary value, the discovery request casts light on Facebook’s haphazard content decisions involving Myanmar state actors and underscores the level of transparency that this kind of decision-making demands.

To recall, Facebook banned Myanmar’s Commander-in-Chief Min Aung Hlaing and other military officials from the platform in 2018. The move coincided with the release of a United Nations Fact-Finding Mission on Myanmar (FFM) report (summary here, full report here), which found reasonable grounds to conclude that serious human rights violations had occurred in the country, including crimes under international law encouraged through hate speech constituting incitement to violence. The FFM specifically found that the internet and social media platforms such as Facebook “enabled the spread of … hateful and divisive rhetoric” targeting the Rohingya. This is especially important in a country such as Myanmar, where, as the FFM observed, “Facebook is the Internet” (para. 1345). Given the ubiquity of Facebook in Myanmar, the FFM found it “unsurprising that propagators of hate speech resort to Facebook to wage hate campaigns, amplify their message, and reach new audiences” and noted that the platform “has also been widely used to spread misinformation, including by government officials and the Tatmadaw” (para. 1346).

Facebook has since removed additional military-linked accounts for other reasons.

The rationale underlying Facebook’s high-profile content decisions acquires more significance when considered in the local context. In the same week that news of Gambia’s discovery proceedings broke, the Myanmar press reported a related development: the Myanmar military is officially back on Facebook with the accounts “Tatmadaw True News Information Team” and “Zaw Min Tun.” According to military spokesperson Brigadier General Zaw Min Tun: “As there is both correct and incorrect news, we have decided to use Facebook, which is widely used in Myanmar, in order to swiftly provide the people and media with accurate news.”

Myanmar’s upcoming national elections form the backdrop of this development. Elections are expected to take place in November this year. As I have analyzed here, the image of state actors and private individuals being treated alike as Facebook users is a powerful one, especially in a country with a long history of censorship and violence. Elections can deepen existing fault lines and complicate the balancing of voice and safety on the platform. Under Facebook’s current policies, for instance, political figures’ posts are exempted from third-party fact-checking, except when such content “endangers people.” A review of Facebook’s past content decisions in Myanmar can guide assessments of when election-related content crosses the threshold of harm notwithstanding its public interest value. Knowing the reasons behind the military account removals in 2018 would also inform Facebook’s approach to addressing these new military accounts in 2020.

Gambia’s Discovery Request

In the request, Gambia asks for electronic content, specifically documents and communications that were “produced, drafted, posted or published” by the following individuals and government agencies whose Facebook accounts were “suspended or terminated”:

  • Commander-in-Chief Senior-General Min Aung Hlaing (both official and personal accounts)
  • Other individuals and organizations that Facebook banned in 2018
  • Individuals enumerated in this news report
  • Myanmar Police Force
  • Myawaddy, the military’s television network
  • Myanmar 33rd Light Infantry Division
  • Myanmar 99th Light Infantry Division
  • Facebook pages taken down for coordinated inauthentic behavior
  • Facebook accounts, pages, and groups as well as Instagram accounts belonging to or controlled by “Myanmar state officials, representatives, entities or groups acting or suspected of acting in coordination with Myanmar state entities whose accounts were taken down for coordinated inauthentic behavior”

The request also encompasses all documents from any related internal investigations that Facebook conducted into content policy violations by these individuals and entities.

The focus on suspended or terminated military accounts tracks the FFM report to an extent, but the FFM’s findings go beyond the military in attributing responsibility. Indeed, the FFM found that “Buddhist charity and welfare groups, movements such as 969 and MaBaTha, and Rakhine nationalist political parties and individuals have waged a campaign for the protection of ‘race and religion’” in Myanmar while using “dehumanising and stigmatising language targeting the Rohingya, and Muslims” generally (para. 1320). However, it was not solely private individuals and groups who worked to foster hatred of the Rohingya. The FFM found that government authorities in Myanmar “not only condoned” such hate speech but also “actively participated in and fostered” messages of hatred against the Rohingya (para. 1327 et seq.).

Yet many of these government actors remain on Facebook. The government newspaper Global New Light of Myanmar, which is operated by the Ministry of Information and whose previous publications depicted all Rohingya, including civilians and children, as “ARSA [Arakan Rohingya Salvation Army] terrorists,” still maintains a Facebook presence. The Facebook page of the State Counsellor Information Committee and the Facebook account of the government spokesperson also remain, despite having shared anti-Rohingya rhetoric and false information in the past. This raises broader questions about where Facebook draws the line, especially in a context where a U.N. body has made relevant findings on state actors across the board, transcending party lines and civilian-military distinctions.

This is not an argument against content moderation. Nor is it an argument against account removal per se or for account removal of all state actors. Account removal may well be justified given the FFM’s findings as well as the extreme belatedness of Facebook’s measures by 2018. In a 2019 report, the U.N. Special Rapporteur on freedom of expression acknowledged account removal as one tool for moderating problematic content, alongside others: downranking, affixing warnings and labels, promoting counter-messaging, and developing ratings to highlight a person’s use of prohibited content, among others. As seen in the divergent responses of Twitter and Facebook to President Trump’s controversial post (“when the looting starts, the shooting starts”) amid widespread police brutality across the United States, the content moderation toolbox is larger than commonly thought, and deciding which tools to have, which ones to use, or whether to use one at all is a political choice that social media platforms continue to make.

As Facebook explores other types of responses at its disposal, the necessity of explaining the use of one tool over another, against one state actor but not another, becomes stark. Parenthetically, account removals will be beyond the remit of the Oversight Board when it starts hearing cases later this year, as its immediate mandate concerns content removals.

Facebook’s Content Moderation in Myanmar

The justification for Facebook’s high-profile Myanmar-related content decisions differs in each case. In early 2018, Facebook removed the accounts of Buddhist monks Wirathu, Thuseitta, and Parmaukkha, as well as the Buddhist organizations Ma Ba Tha and the Buddha Dhamma Prahita Foundation, for violating Facebook’s Dangerous Individuals and Organizations policy. In early 2019, Facebook controversially banned ethnic armed organizations from the platform for supposedly violating the same content policy. The most high-profile ban involved the Facebook accounts of Myanmar’s Commander-in-Chief and other military officials in August 2018. Unlike the other cases, however, Facebook did not expressly invoke any particular content policy to justify the ban. Facebook referred to the FFM report in its announcement, but the company did not directly attribute the military account removals to the FFM’s findings. In the words of Facebook:

[W]e are banning 20 individuals and organizations from Facebook in Myanmar — including Senior General Min Aung Hlaing, commander-in-chief of the armed forces, and the military’s Myawady television network. International experts, most recently in a report by the UN Human Rights Council-authorized Fact-Finding Mission on Myanmar, have found evidence that many of these individuals and organizations committed or enabled serious human rights abuses in the country. And we want to prevent them from using our service to further inflame ethnic and religious tensions.

As the Washington Post reports, a Facebook official unofficially disclosed that the FFM report simply “put a deadline” on what Facebook itself had already set out to do. A Facebook spokesperson also represented to Reuters that reports from various sources, including but not exclusively the FFM report, informed Facebook’s decision to take down the military accounts. The spokesperson failed to explain Facebook’s approach toward other government accounts.

From the above statement, one could infer that the banned military accounts likely violated one, some, or all of Facebook’s Violence and Incitement, Dangerous Individuals and Organizations, and Hate Speech policies. The lack of clarity is precisely the problem here. A proper articulation of the content policy violations justifying the military account removals can offer insight as to why such accounts were removed while other government accounts were allowed to stand. A proper explanation would also make clear how such decisions would impact future developments (e.g., the new military accounts). Content decisions involving state actors deserve a detailed articulation precisely because of their impact on human rights and the public interest.

There is an increasing turn toward eradicating “coordinated inauthentic behavior” on Facebook, often associated with foreign electoral interference. But Myanmar military-linked accounts have also been removed for violating Facebook’s policies against misrepresentation and coordinated inauthentic behavior. Importantly, Facebook views these harms not as “content violations” but rather as a matter of conduct. The harm lies in deceiving other Facebook users as to the nature of a rogue user’s engagement through a concerted use of fake Facebook accounts. The policy seeks to prevent “bad actors” from manipulating the platform and misleading others “about who they are or what they’re doing.” This, in turn, would enable people to “trust the connections they make.”

Enforcement of Facebook’s “coordinated inauthentic behavior” policy is a process shrouded in secrecy. It can also deflect public scrutiny by shifting the focus of the harm and avoiding the question of content altogether. This allows Facebook to sweep hard content issues under the rug and achieve particular outcomes without necessarily delving into the complex content questions arising from supposedly deceptive engagement.

The Case for More Transparency in Content Decisions

As we now see in Myanmar and other emerging markets that served as laboratories for Facebook’s experiments, the company’s past exercise of discretion continues to haunt the present. Facebook’s content decisions toward state actors in Myanmar obscure rather than illuminate, and more transparency is called for if Facebook is serious about consistent global enforcement of its Community Standards. Facebook should begin by unequivocally enumerating all the content policies implicated by problematic content, instead of invoking a single applicable rule in piecemeal fashion. It should explain why a particular content policy violation by a state actor merits a certain response that distinguishes it from other related cases. Such an explanation would support human rights accountability initiatives, such as Gambia’s ICJ case against Myanmar, by signaling to victims and rights groups where potentially useful information may lie for further digging. Not least, it would allow Facebook’s stakeholders to reflect deeply on the competing harms constantly being negotiated on the platform, and it would empower them to participate meaningfully in discovering solutions.

Image: Patrons gather for refreshment at a tea shop in Yangon on August 31, 2018, where many hang out to chat and browse Facebook on their mobile phones. (SAI AUNG MAIN/AFP via Getty Images)