The decision by the Facebook Oversight Board on whether former President Donald Trump should be allowed back on the platform demonstrates both the extent and the very real limits of the board’s authority. Operating under the mandate of a private company, the board sat in judgment on its creator’s decision to banish its most famous user, who at the time was the most powerful person in the world.

At the same time, the board was unable to get answers from Facebook about the role that its algorithms and policies played in promoting lies about the 2020 election, the information ecosystem within which Trump operated, or the influence of political actors on the company’s decisions. The board made critical recommendations: that Facebook should reckon with its own role in amplifying content and overhaul its approach to high-reach accounts. Since these are not binding, the ultimate test of the board’s influence will be how the company responds.

The Board’s Decision

As in its prior decisions, the board reviewed Facebook’s decision to suspend Trump’s account on Jan. 6 and the continued indefinite suspension under three rubrics: Facebook’s Community Standards, Facebook’s Values, and international human rights law.

The board concluded that Trump’s Jan. 6 posts violated Facebook’s Dangerous Individuals and Organizations Community Standard, because Facebook had designated the Capitol attacks as “violent events” and Trump’s comments — “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second — demonstrated praise of those events. The board also found that the suspension of Trump’s account was consistent with Facebook’s values of “Voice” and “Safety,” with the imminent safety concerns presented by the attacks outweighing the interest in preserving freedom of speech.

Under international human rights law, the board assessed whether the suspension met the test of legality, purpose, and proportionality. Legality requires that a decision be made pursuant to a clear rule. The board found that the Dangerous Individuals and Organizations Standard was sufficiently clear to give notice that Trump’s posts were violations. But because neither Facebook’s Community Standards nor its Terms of Service nor any other public pronouncement by the company mentioned indefinite suspension as a potential consequence of policy violations, the penalty Facebook imposed did not meet the standard of legality.

The board found the other two requirements of human rights law were met. The policy restricted free expression for a legitimate purpose because it sought to protect “public order, as well as respect for the rights of others, including the rights to life, security, and to participate in elections and to have the outcome respected and implemented.” Finally, proportionality requires that restrictions on speech be accomplished by the least restrictive means. Referring to the factors set out in the Rabat Plan of Action, the board determined that the initial suspension was proportional due to the imminence and severity of the contemplated harm, as well as Trump’s status as head of state and the size of his audience.

The board directed Facebook to revisit the suspension of Trump’s account and impose a penalty that is supported by a clear, publicly available policy. Facebook must issue a new penalty within six months and respond to the non-binding policy recommendations offered throughout the decision within 30 days. Due to current limits on the board’s jurisdiction (users cannot appeal account suspensions and bans, only decisions on individual posts), board members have clarified that Facebook itself has the option to elevate any new penalty to the board for review.

Facebook Rules and Human Rights Standards

The board’s prior decisions have emphasized the importance of clarity in rules and predictable outcomes. In the Trump case, by concluding that Facebook could not impose a penalty that was not mentioned in its rules — “indefinite suspension” — the board continued pushing Facebook towards a rules-based system. At the same time, the board bypassed important opportunities to address ambiguities in Facebook’s existing policies, including some that it had previously highlighted.

Facebook’s Dangerous Individuals and Organizations policy prohibits praise and support of designated groups and persons, as well as events that the company deems to be “violating.”  The board glossed over Facebook’s use of “violating event” to justify suspending Trump. The board may have felt that it was obvious that the attack on Jan. 6 was a violating event and wanted to preserve Facebook’s flexibility to respond to a safety threat emerging in real time. But in construing other rules, the board has demanded that Facebook articulate clear definitions that provide users with notice of prohibited content. The board’s recommendation that Facebook create a policy for addressing “crises or novel situations where its regular processes would not prevent or avoid imminent harm” seems to be directed at ensuring that the company provides users with greater notice when responding to similar events in the future.

Nor did the board take Facebook to task for its failure to implement prior recommendations to clarify ambiguities in the Dangerous Individuals and Organizations policy, including defining “praise” and “support,” as well as providing information on how it designates individuals or organizations as “dangerous.” Facebook has been slow to implement past recommendations by the board: it has not committed to a timeline to clarify “praise or support” and claims it is still “assessing the feasibility” of releasing its list of designated “dangerous” organizations.

Finally, the board did not consider the impact of overlapping policies on the decision to suspend Trump. When it accepted the case, the board seemed to acknowledge that multiple Community Standards — Dangerous Individuals and Organizations, Violence and Incitement, and Coordinating Harm — were potentially applicable to Trump’s posts, and commentators and some members of the board argued that a different standard should have been applied. The decision, however, offers no comment on the difficulty of distinguishing among these rules and identifying which was most applicable.

In contrast, the board was rigorous in its analysis of the penalty provisions of Facebook’s Terms of Service. It noted that there was no basis for the “indefinite suspension” penalty because Facebook gave notice of only three actions that could be taken against an account: removal of content, time-limited suspension, and permanent ban. The board did not, however, clearly indicate whether it expects Facebook to apply one of those three existing penalties, or whether Facebook may create a new “indefinite suspension” penalty, applicable to all users and governed by publicly available criteria for both imposing and lifting the restriction. While the board’s decision itself seems to authorize the latter, public comments by board members suggest that they expect Facebook to pick from among the three pre-existing penalties.

In several prior cases, the board overturned decisions to remove posts because it defined the relevant context more narrowly than Facebook. In the Trump case, the board chose a middle path. It didn’t confine itself to just the two posts at issue, but also considered Trump’s persistent attempts to cast doubt on the results of the 2020 election, his position of authority, and his large following. At the same time, it resisted the impulse (advocated by a minority of the panel that considered the case) to take account of the full range of Trump’s earlier posts, such as those implying that racial justice protestors should be met with violence or blaming China for the coronavirus.

Constraints on the Board

Under its charter and bylaws, the board is permitted to ask Facebook for information, but the company has reserved the right to withhold information if it decides that it is “not reasonably required for decision-making.” In the Trump case, Facebook deployed this reservation of authority to prevent the board from addressing highly significant questions.

Many commentators have pointed out that focusing on Trump’s suspension allows Facebook to avoid questions about its own role in promoting lies about the results of the 2020 elections.  Indeed, Facebook has deliberately kept these issues outside the Oversight Board’s jurisdiction, and in this case the company rebuffed the board’s attempts to grapple with them. In particular, Facebook refused to answer the board’s questions about “how Facebook’s news feed and other features impacted the visibility of . . . Trump’s content,” “whether Facebook has researched, or plans to research, those design decisions in relation to the events of January 6, 2021,” and “whether account suspension or deletion impacts the ability of advertisers to target the accounts of followers.” In other words, Facebook was able to block the board from examining matters that affect its bottom line, despite the fact that press reports show that it has internally collected and analyzed at least some of the information the board requested.

By also declining to provide “information about violating content from [Trump’s] followers” or to answer “questions related to the suspension of other political figures and removal of other content,” Facebook constrained the board’s ability to place its treatment of Trump in the context of related content and users. That information could have informed the context in which the board analyzed Trump’s Jan. 6 posts.

Finally, the company refused to tell the board whether it had been “contacted by political officeholders or their staff about [Trump’s] suspension,” echoing its refusal in an earlier case to disclose whether political officials pressured the platform to make content decisions in their favor. This blocks any possibility of the board considering the issue of how powerful political actors influence the platform, which has long been a concern.

Looking Ahead

The board made several important policy recommendations, which—if Facebook chooses to follow them—could have far-reaching effects. First, the board recommended that Facebook issue a detailed report on its “potential contribution to the narrative of electoral fraud and the exacerbated tensions that culminated in the violence in the United States on January 6.” The board said the report should cover the roles of platform design, policy, and advertising decisions made by Facebook in the weeks leading up to the attacks. If Facebook were to allow a truly independent audit along these lines and publish the results, it could be the first step in reckoning with the true power that the company exercises.

Next, consistent with its emphasis on clear rules, the board made important recommendations about how penalties are imposed, with a particular focus on prominent users. The board recommended that Facebook act quickly to enforce its rules “[w]hen posts by influential users pose a high probability of imminent harm,” but insisted that such actions be governed by published rules. It also recommended greater transparency into how penalties are assessed against high-reach users and how Facebook’s newsworthiness exception — which continues to generate confusion — is applied to preserve content that otherwise violates the Community Standards. The board further recommended reporting on the error rates of Facebook’s penalty and content-removal systems, which, if implemented, would shed much-needed light on the accuracy of the company’s removal decisions.

Finally, Facebook has long been criticized — including in past board decisions — for prioritizing content moderation for users in the United States and Western Europe while devoting only minimal resources to such efforts in other countries. To address this imbalance, the board recommended that Facebook devote more resources to content moderation, including local expertise in regions where language and cultural context add nuance, with a particular focus on high-reach accounts.

Facebook now has six months to make a call on Trump’s account. But while all eyes will be on that decision, it is critical that we also pay attention to how Facebook responds to the board’s recommendations, particularly the review of its own role in spreading the election misinformation that led to the Jan. 6 attacks and its policy on high-reach users. The company’s response to the board’s decisions thus far can fairly be characterized as lackluster, and continuing in the same vein on these critical recommendations could put a big dent in the board’s ability to act as a mechanism for accountability.

IMAGE: In this screenshot taken from a congress.gov webcast, video evidence is presented on the fifth day of former President Donald Trump’s second impeachment trial at the U.S. Capitol on February 13, 2021 in Washington, DC. House impeachment managers had argued that Trump was “singularly responsible” for the January 6th attack at the U.S. Capitol and he should be convicted and barred from ever holding public office again. (Photo by congress.gov via Getty Images)