Facebook’s Content-Decision Oversight Board Carves Out Own Territory

As the U.S. Senate takes tech CEOs to task, first this week before the Commerce Committee and again Nov. 11 before the Judiciary Committee, Facebook’s two-year-long effort to establish a check on decisions to remove speech from its platform finally kicked into gear. The Facebook Oversight Board, established at the end of 2019 to hear appeals of the company’s content-moderation decisions, announced that it is ready to begin hearing cases. While the board will only review a sliver of Facebook decisions that determine the contours of online public debate, it has the potential to bring some measure of accountability to tech’s virtually unchecked power. Already, the board has taken steps to bolster its independence.

Human Rights First

The board made clear in its Oct. 22 announcement that it will not be constrained by Facebook’s internal rules (community standards), but will also examine whether those rules conform to international human rights standards. The board explained that it envisions itself gradually pushing the company to have “a clearer basis for its decisions in human rights norms.”

Facebook itself has shied away from explicitly stating that its content-moderation regime is subject to human rights standards. The board documents prepared by the company instead prioritize references to its community standards and values (such as “voice,” “authenticity,” “safety,” “privacy,” and “dignity”). The board’s charter does instruct the board to “pay particular attention to the impact of removing content in light of human rights norms protecting free expression,” and the bylaws require the board to report on how its decisions have considered or tracked the international human rights implicated by a case.

By building on these references and explicitly adopting the established human rights framework as its foundation, the board has shored up its legitimacy and credibility. Human rights standards, while not necessarily easy to apply, have the virtues of international acceptance, stability (Facebook can unilaterally change its community standards, and does so regularly), and an established body of law to guide the board in making inevitable tradeoffs. Building out an international human rights framework for content moderation would also be a service to the field and would resonate beyond Facebook to the many companies, from Zoom to Spotify, that suddenly find themselves in the business of monitoring their users’ speech.

It is also worth noting that while the board’s charter, as drafted by Facebook, prioritizes free expression, the board has indicated that it will look at human rights issues more broadly. The announcement states:

We expect that our decisions will address a variety of freedom of expression and human rights concerns that arise from content moderation. This includes instances where the expression of some may silence or endanger others, or in turn where expression may be threatened.

In recent years, commentators have sought to adapt ideas of free expression to the reality of the modern internet, where an individual’s ability to speak can be harmed by other voices drowning them out. The board appears to find this conception of free expression compelling, and adopting it would be a departure from Facebook’s generally narrower approach.

Unfortunately, the company’s control of the board’s docket means that some of the critical cases where Facebook’s actions conflict with human rights standards will not be reviewed. The board is not authorized to review content taken down due to local law. This is a major failing, as we have argued before:

While the company has legal obligations in countries where it operates, there is a strong case to be made that where these laws violate international human rights norms (e.g., by criminalizing homosexuality and feminism), the board should be able to weigh in and Facebook, in the implementation phase, could explain that local laws prevent it from complying with the board’s decision and how it has done so in [the] most narrow way feasible given legal constraints (e.g., by removing the content only in the country where it is illegal).

Board Infrastructure

The board has taken overt steps to distinguish itself from Facebook. It has established its own website with distinct graphics and a unique user interface, and has hosted press calls independent of the company. It has also made efforts to separate its infrastructure, launching its own portal for user appeals: in addition to reaching the board through Facebook or Instagram, users seeking to appeal a content-moderation decision can do so directly on the board’s website. However, the board will continue using a case management tool created by Facebook, which means that anyone seeking to appeal still has no mechanism for bringing a case to the board that is not controlled by the company.

The board is permitted to make amendments to its governing documents (some require the consent of the company), and although none have yet been publicly announced, they are widely expected.

We have previously noted that the trust set up by Facebook to oversee the board’s budget and administration has the potential to influence the board’s operations through its role in appointing and removing members, approving funding, and amending the governing documents. Some changes have been made to the trust agreement, but they do not diminish its influence over the board.

Transparency

The board has promised transparency beyond the requirements of its governing documents. In addition to publishing the required annual report, it will publish brief, anonymized descriptions of the cases under review. And, for each case, the board will open a public comment period before deliberations to allow third parties to submit feedback. This creates a clear avenue for civil society to provide broader context that may be useful to the board. The value of the opportunity, both for those seeking to make their views heard and the board itself, will depend to a great extent on whether the board’s case descriptions are sufficiently detailed.

These are modest steps to be sure, but they are welcome signals from a board that will have such a pivotal role in public discourse and yet has often been the subject of skepticism about its ability to stand apart from the company that created it.

IMAGE: Facebook CEO Mark Zuckerberg appears on a monitor behind a stenographer as he testifies remotely during the Senate Commerce, Science, and Transportation Committee hearing ‘Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?’, on Capitol Hill, October 28, 2020 in Washington, DC. CEO of Twitter Jack Dorsey; CEO of Alphabet Inc. and its subsidiary Google LLC, Sundar Pichai; and CEO of Facebook Mark Zuckerberg all testified virtually. Section 230 of the Communications Decency Act guarantees that tech companies cannot be sued for content on their platforms, but the Justice Department has suggested limiting this legislation. (Photo by Michael Reynolds-Pool/Getty Images)
About the Author(s)

Laura Hecht-Felella

George A. Katz Fellow at the Brennan Center’s Liberty and National Security Program at NYU School of Law; previously worked at Brooklyn Legal Services, where she represented low-income New Yorkers in litigation seeking to prevent displacement and preserve affordable housing. Follow her on Twitter (@laur_hf).

Faiza Patel

Co-Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU School of Law, Former Senior Policy Officer at the Organization for the Prohibition of Chemical Weapons. Member of the editorial board of Just Security. Follow her on Twitter (@FaizaPatelBCJ).