The Facebook Oversight Board, in deciding its first cases, overturned five out of six of the company’s decisions. While the board’s willingness to depart from its corporate creator’s views is noteworthy, the bigger message is that Facebook’s content-moderation rules and its enforcement of them are a mess and the company needs to clean up its act.

Indeed, many of the issues raised by the board reflect longstanding criticisms from civil society about Facebook’s content-moderation scheme, including the company’s use of automated removal systems, its vague rules and unclear explanations of its decisions, and the need for proportionate enforcement. Facebook’s ongoing inability to enact a clear, consistent, and transparent content-moderation policy may well lead the board to overturn Facebook’s decision to bar former President Donald Trump, a case that the company has voluntarily brought to the board.

Unreliable Algorithms

The board used Facebook’s removal of an Instagram post about breast cancer (a removal the company conceded was incorrect) as an occasion to express concerns about the company’s reliance on automation, as well as the sweep of its policy against nudity. Automated removals have long been criticized as more susceptible to error than human review. In the context of Covid-19, for example, algorithms mistakenly flagged posts of accurate health information as spam while leaving up messages containing conspiracy theories.

These mistakes particularly affect Facebook users outside Western countries, since Facebook’s algorithms work in only certain languages and automated tools often fail to adequately account for context or for political, cultural, linguistic, and social differences. Facebook’s treatment of female breasts as sexual imagery has also been a contentious issue for many years – even photos of breastfeeding mothers and of mastectomy scars displayed by cancer survivors were routinely removed until intense public pressure forced a policy change.

The board identified several harms resulting from the company’s automated enforcement of the nudity policy: interference with users’ expression; a disproportionate impact on female users (the company allows images of male nipples but not female ones); and, given the important public health goal of raising awareness about breast cancer, harm to women’s right to health. To ameliorate these harms, the board recommended that Facebook improve its automated detection of images containing text overlay (in this case, the algorithm had failed to recognize the words “Breast Cancer” that appeared as part of the image). The board also recommended that the company audit a sample of automated enforcement decisions to reverse and learn from mistakes, and that it include information on automated removals in its transparency reports.

The Oversight Board also made important suggestions for ways that Facebook can improve the process available to users whose posts are removed, including notifying the user of the specific rule they have violated and of the use of automation, and providing users with the right to appeal to a human reviewer. Many of these suggestions are similar to those set out in the Santa Clara Principles, a civil society charter that in 2018 outlined minimum standards for companies engaged in content moderation.

Vague Rules

In another case, the board took aim at Facebook’s Dangerous Individuals and Organizations Policy, overturning the removal of a post quoting Joseph Goebbels. The company had internally designated Goebbels as a dangerous individual and the Nazi party as a hate organization. The underlying policy, however, was developed in response to calls from U.S. and European governments for social media companies to do more to combat ISIS and al-Qaeda propaganda.

As the U.N. Special Rapporteur for Counterterrorism and Human Rights and various civil society groups have pointed out, the policy does not identify all the groups and individuals the company considers dangerous, and its enforcement has focused almost exclusively on content related to ISIS and al-Qaeda, placing Muslim and Middle Eastern communities and Arabic speakers at greater risk of over-removal. Moreover, if the company’s removals follow the pattern of the Global Internet Forum to Counter Terrorism (GIFCT), a consortium in which it participates, the vast majority of removals would be for the most ambiguous types of posts: those that “praise” or “support” a listed organization.

In deciding the case, the board focused on the specifics of the post to conclude that it “did not support the Nazi party’s ideology.” But it also found that Facebook’s policy on Dangerous Individuals and Organizations failed to meet the international human rights requirement that “rules restricting expression must be clear, precise and publicly accessible.” The policy fell short of that standard because it did not explain the meaning of key terms such as “praise” and “support,” did not list the individuals and organizations designated as “dangerous,” and did not make clear that Facebook requires users to affirmatively spell out that they are not praising or supporting a quote attributed to a dangerous individual. The board recommended that Facebook clarify the terms of its policy and publish a list of dangerous organizations and individuals to close the “information gap” between the publicly available text of the policy and the internal rules applied by Facebook’s content moderators.

This lack of clarity was a theme in several of the cases discussed below as well. In the Covid-19 decision, the board found that Facebook’s vague rules about misinformation and imminent harm did not comply with human rights standards because the “patchwork of policies found on different parts of Facebook’s website make it difficult for users to understand what content is prohibited.”

Context Is Key

In several cases, the board leaned on the geopolitical context of a post to reach its decisions. These cases illustrate that the board’s selection of the relevant “context” to consider – which is not explained in its decisions – is often determinative of the outcome of the case, and the board has tended to view the relevant context more narrowly than has the company.

The board overturned Facebook’s removal of a post from Myanmar that cited “the lack of response by Muslims generally to the treatment of Uyghur Muslims in China, compared to killings in response to cartoon depictions of the Prophet Muhammad in France” to argue that something is wrong with Muslims’ mindset or psychology. The company had acted under its hate speech policy, which prohibits generalized statements of inferiority about a religious group based on mental deficiencies.

The board, however, concluded that statements referring to Muslims as mentally unwell or psychologically unstable, while offensive, are “not a strong part” of the “common and sometimes severe” anti-Muslim rhetoric in Myanmar. If the board had taken a wider view of context, it could well have reached the opposite conclusion. Facebook’s failure to control anti-Muslim hate speech in Myanmar has been linked to the genocide of Rohingya Muslims in the country, violence that continues to this day.

The board also overturned Facebook’s removal of a post criticizing the French government for refusing to authorize the use of hydroxychloroquine, which the user called a “cure” for Covid-19. Because the drug is not available in France without a prescription and the post did not encourage people to buy or take drugs without one, the board determined that the post did not create a risk of imminent harm, as required by the violence and incitement policy under which it was removed. Here again, if the board had looked at the broader problem of misinformation around Covid-19, or even around hydroxychloroquine, it could well have reached the opposite conclusion.

In a decision released on February 12, 2021, the board overturned Facebook’s decision to remove a post from India that the company had treated as a veiled threat prohibited under its violence and incitement policy. The post from October 2020, depicting a sheathed sword, said “if the tongue of the kafir starts against the Prophet, then the sword should be taken out of the sheath,” and included hashtags calling for a boycott of French products and labeling French President Emmanuel Macron the devil.

For Facebook, the relevant context was “religious tensions” in India related to the Charlie Hebdo trials occurring in France at the time of the post and elections in the Indian state of Bihar, which were held from October through November, as well as rising violence against Muslims and the possibility of retaliatory violence by Muslims. A majority of the board looked at the same events, but with greater specificity: the protests in India following Macron’s statements were mostly nonviolent and the elections in Bihar were not marked by violence against persons based on their religion. Moreover, while the board viewed violence against the Muslim minority in India as “a pressing concern,” it did not give the same weight to the prospect of “retaliatory violence by Muslims.” Overall, the majority interpreted the references to the boycott of French products as a call to “non-violent protest and part of discourse on current political events.”

Proportionality

The board also grappled with the issue of proportionality. In the Covid-19 case, it found that Facebook’s removal of the post was not proportionate because the company did not explain how removal constituted the least intrusive means of protecting public health.

In another case, in which the board upheld Facebook’s removal of a post using a racial slur to dehumanize Azerbaijanis, the members split on the issue. The majority concluded that removal was proportionate because less severe interventions, such as placing a label or a warning screen on the post, would not have provided the same protection against offline harms – a risk the majority considered particularly severe because of the ongoing armed conflict in the Nagorno-Karabakh region.

The minority of the board – whose opinions were summarized in the decision – argued otherwise. One member thought the risk of violence was relatively remote and, given that the removal took down speech on a matter of public concern, less intrusive measures should have been considered. Another member believed that the post would not contribute to military or other violent action. It is difficult to distinguish this case from the Covid-19 case, except perhaps on the basis of the type of harm at issue: the prospect of physical violence in the Azerbaijan case versus the more diffuse threat of disinformation about Covid-19.

Implications for the Trump Case

Figuring out what these decisions mean for what is likely to be one of the board’s biggest cases – its review of Facebook’s decision to indefinitely suspend Donald Trump from the platform after removing two missives he posted during the Jan. 6 riot at the U.S. Capitol – is like reading tea leaves. Both posts instructed the rioters to “go home,” but also reiterated Trump’s false assertions that the election had been “stolen from us” and “unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long.”

In the Goebbels decision, the board found that Facebook’s rules on Dangerous Individuals and Organizations – the basis for removing Trump’s posts – failed the international standard of legality. The board has also been attentive to the need to avoid interfering with public discourse (e.g., on government policy on Covid-19 or objections to Macron’s treatment of Muslims in France). That concern is particularly weighty when the speech at issue is that of the president of the United States.

Context, which has played such an important role in the board’s decisions thus far, will undoubtedly be key. But in the case of Trump, it probably will not matter whether the board looks at the long arc of his attempts to undermine the election and rile up his followers or only the events of Jan. 6. Both show the danger he posed.

Much is likely to hinge on how the board evaluates the proportionality of the indefinite suspension. Aside from an outright ban, an indefinite account suspension is one of Facebook’s most severe enforcement tools, particularly when compared to post removals, labels, warning screens, or other measures the company might take to reduce dissemination. And Facebook has not publicly explained the grounds on which it suspended Trump’s account, except to say that the suspension, when weighed against the values underpinning its Community Standards (voice, authenticity, safety, privacy, and dignity), was “necessary and right” in order to prioritize “safety in a period of civil unrest in the US with no set end date.” That seems a thin basis for a momentous decision, especially one being reviewed by a board that has placed so much emphasis on the need for clear rules.

At the end of the day, though, as the Knight Institute pointed out in its excellent submission to the board, the bigger issue is not whether Trump was rightly kicked off Facebook, but the company’s responsibility for its “decisions about design, which determine which speech proliferates on Facebook’s platform, how quickly it spreads, who sees it, and in what contexts they see it.” Although the company has sought to exclude this issue from the board’s jurisdiction, the board must push Facebook to address it. Otherwise, the board will be treating only the symptoms of the problem, not its cause.

IMAGE: A man browses Facebook on his smart phone after the mobile internet went back online in Kampala, Uganda, on January 18, 2021. (Photo by YASUYOSHI CHIBA/AFP via Getty Images)