With Facebook’s unveiling of the first 20 (of an expected 40) members of its newly minted Oversight Board for content moderation, it’s a good moment to step back and evaluate the larger picture: what these specific individuals, and the institution they are helping to create, can now offer.

A fundamental question arises in this moment: Do social media companies have the power of governance?

It is hard to avoid seeing that power in the age of COVID-19. Twitter took down tweets by the president of Brazil, Jair Bolsonaro, whose posts it decided were contrary to public health guidance. Facebook began taking down event pages for anti-lockdown protests that violated social distancing strictures. Not to be outdone, YouTube began demonetizing content promoting virus misinformation.

The power social media companies exercise in this time of COVID is like old wine in new bottles. The platforms have long influenced public debate worldwide, displacing, in some places, traditional media and even government figures. They organize themselves bureaucratically, with rule-makers who define what’s legitimate to post, share, and like, and enforcers who decide when to take content down.

Massive influence? Check. Legislative power? Check. Executive power? Check.

Now one of them has a court.

Today, Facebook announced the first panelists – the judges of what Mark Zuckerberg once, perhaps to his regret, called the Facebook Supreme Court – of its newly created Oversight Board. An external body with the power, according to its draft charter, “to reverse Facebook’s decisions about whether to allow or remove certain posts on the platform,” the group is more impressive than a skeptic could have imagined. Its membership may lean toward the United States and Europe, but there is global representation.

I know some of the panelists as friends and colleagues, several of whom are major figures in the world of human rights law and advocacy. As examples:

  • Catalina Botero, one of the four co-chairs, is the former Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights and a leading jurist in Colombia.
  • Maina Kiai is a former United Nations Special Rapporteur on the rights to freedom of peaceful assembly and of association, and a leading figure in Kenyan civil society.
  • Evelyn Aswad, a law professor at the University of Oklahoma, was a key member of the State Department’s legal office dealing with human rights issues and has written trenchantly about the role of human rights in content moderation.
  • Julie Owono leads the Paris-based Internet Sans Frontières and knows as much as anyone about digital rights, especially in Africa.
  • Nighat Dad has fought for digital rights as a lawyer in Pakistan for years and is deservedly well-known internationally for her brave advocacy for online freedom of expression.
  • Nicolas Suzor, a gifted academic in Australia, wrote a key book on social media with a title (“Lawless”) that makes you wonder how he got through the selection gauntlet.

I could go on, but this gives a sense of the commitment to human rights and rule of law that participants bring to the endeavor.

Platforms Creating Their Own Regulations

All of which leads to the question: How did we get here, to this place where companies so dominate public speech, causing so much friction with governments and the public, and yet, as Chinmayi Arun describes so well, must create their own mechanisms of self-regulation? Where is the government oversight that would promote and protect democratic principles? Why should private companies be making, and then overseeing, decisions that have such impact on public life?

Like all of the dominant companies, Facebook began with a light set of rules, focused not on governing speech but on encouraging users to join up, add content, and expose their interests and personalities. Such content, after all, feeds the platform: the human engagement and personal data that all the companies convert into advertising dollars.

But as all of the platforms grew, their impact leapt from the iconic but trivial: they became public forums for politics and organizing – and for hate, harassment, and silencing. They all seemed to make decisions about specific content only under the pressure of scandal, and it was pressure in the largest markets that forced change.

Communities lacking the power of American and European markets suffered neglect, nowhere more so than Myanmar, where Facebook did virtually nothing in response to hate speech and incitement to genocidal violence against the Rohingya in Rakhine State.

Platform rules expanded in an effort to meet the challenges of online hate, harassment, and disinformation. But these actions were, and remain, insufficient for many. Even with new forms of transparency reporting, decision-making under the rules has remained stubbornly opaque, reinforcing a view that the companies operate according to their own interests, not the public’s.

Online speech problems, meanwhile, are compounded by scale, which incentivizes the use of automation and a global workforce driven to review the muck and mire of online expression. Moreover, while governments in the United States and Europe have done little to regulate the companies, authoritarians increasingly impose onerous demands on the platforms and on those who use them for expressive purposes. These include so-called false information laws with severe penalties for what people post online, with particular impact on journalists, bloggers and critics of government officials and policy.

Input and Independence

It is in this environment that Facebook first announced the creation of the Oversight Board late in 2018. Since then, the company has commissioned an outside evaluator to assess the board’s potential human rights impact and has conducted global consultations to ensure buy-in from activists and thinkers worldwide. In my role as the United Nations’ principal human rights monitor for freedom of expression, I provided input as well.

Most importantly, it has funded the board with a $130 million trust, hired experienced leadership rooted in human rights advocacy, and promised the board independence from Facebook along with a commitment to carry out its judgments.

For now, we know the rules the board must observe. The effort has been transparent and, in its way, pathbreaking. The Oversight Board deserves time to succeed, to decide hard cases in ways that the company may not like. That won’t happen before the end of this year, at the earliest. Among the key questions will be just how far the board is willing to go in taking decisions that undermine the company’s business interests, just how broad a scope it believes it has, and just how independent from Facebook it is. All that will need to be monitored as the process moves forward.

And this does not end the debates over online speech. For one thing, the board will not have jurisdiction over the legal demands imposed by governments. When, for example, Singapore or Turkey or Egypt or France (or any other government) demands a takedown of content under its domestic law, that appears to be beyond the scope of the board.

What happens when a government complains about a decision of the board? Who wins that fight?

Further, difficult content problems often arise at the local level, in languages and coded speech that may be impenetrable to outsiders. Will the board ever have the bandwidth to address the massive impact Facebook will continue to have in communities worldwide? Will the board, in other words, be more like a Band-Aid on a massive wound than an appellate body able to solve the crises of online speech?

And as laudable as Facebook’s effort is, it addresses only Facebook’s problem of legitimacy. That is, it could help legitimize Facebook’s decisions but cannot legitimize content-moderation choices across all platforms. Neither can it legitimize practices that many find objectionable, such as the increased use of automation to make decisions about content. If the board’s decisions are rooted in the kinds of human rights standards that individuals around the world cherish, and if they genuinely absorb the input of communities worldwide, Facebook’s legitimacy and influence may rise.

But that legitimacy could come at a cost that other platforms, or platforms yet to be created, cannot afford. These other platforms, which also have a massive impact on public speech, remain outside the board’s reach. Over time, an industry-wide process could build trust in content moderation and push all of the companies toward transparency and respect for the public impact they have. It may even be that the Facebook Oversight Board could expand to take on that kind of industry-wide role. If it allows small innovators to join at a lower cost, all the better. If it merely locks in the power of those companies that can afford it, it would be a net loss for innovation.

And even then, will the board – Facebook-only or beyond – be enough for governments and for the public worldwide? Governments are champing at the bit to regulate, seeking to impose content rules that the platforms must follow. Will self-regulation, even with tools like the Oversight Board, block that momentum? Will the Oversight Board spur the company and others to greater transparency, enabling genuine democratic oversight and control?

Facebook is taking a significant step toward self-governance. It may be part of the answer to the problems of online speech and censorship today. But ultimately, it is only one part.
