Corporate pronouncements are usually anodyne. And at first glance one might think the same of Facebook’s recent white paper, authored by Monika Bickert, who manages the company’s content policies, offering some perspectives on the emerging debate around governmental regulation of platforms’ content moderation systems. After all, by its own terms the paper simply poses questions to consider rather than concrete suggestions for resolving debates around platforms’ treatment of such things as anti-vax narratives, coordinated harassment, and political disinformation. But a careful read shows it to be a helpful document, both as a reflection of the contentious present moment around online speech and because it takes seriously some options for “content governance” that – if pursued fully – would represent a moonshot for platform accountability, premised on the partial but substantial, and long-term, devolution of Facebook’s policymaking authority.

To be sure, for many people, any reservoir of goodwill for Facebook has run so low that anything short of a forced restructuring – whether by way of antitrust action, sweeping regulation, or some combination thereof – represents tea so weak as to be water. A look back through the company’s history reveals almost too many scandals to count, spanning a laundry list of privacy violations, controversial content moderation standards, and questionable business practices. The incremental, often open-ended forms of regulation that the white paper treats as viable might thus be dismissed out of hand. But part of designing institutions is recognizing that participants will be imperfect, even untrustworthy – indeed, that’s why institutions are created and reconfigured. Keeping an open mind toward proposals like these is justified by two important sensibilities: first, that ideas advancing the public good can align with others’ motives; and second, that more or less voluntary adjustments need not preclude the contemplation of heavier-handed ones.

Rights and Public Health

Facebook’s white paper starts from a now-familiar premise: the rise of social media has occasioned both good and bad. There’s been a blossoming of free speech online, giving rise to productive new forms of social, economic, and political engagement, but also to a proliferation of harmful content – sometimes implicating physical harm – on an unprecedented scale. Content moderation by platforms like Facebook – and regulation aimed at ensuring its proper functioning – must seek to preserve the benefits of open discourse while mitigating the worst of these harms, the white paper says.

This central challenge is worth unpacking, particularly because the “costs and benefits” framing hasn’t been received wisdom for long. Elsewhere, including in an article cited by the white paper, we’ve written about efforts to strike a balance between free speech and harm mitigation in terms of a clash between two distinct eras of discourse around digital governance. In the first – an “era of Rights” stretching from the dawn of the commercial internet – technology companies and civil liberties advocates, joined in some countries (including the United States) by legislators and courts, pursued norms and legal protections oriented towards safeguarding online speech against external interference, particularly by governments. To a proponent of the Rights-era tradition, most forms of harmful content – particularly those not easily defined – are best accepted as unsavory but ultimately justified costs of freedom. “We cannot separate the air that chokes from the air upon which wings beat,” intoned John Perry Barlow in 1996’s “A Declaration of the Independence of Cyberspace.”

The explosive growth of the breadth and significance of online speech, and the new options for platforms like Facebook to shape its course, have put this argument to the test. Starting about 10 years ago, and spurred in large part by the emergence of a few dominant social media platforms, a competing second era dawned – that of “Public Health.” The Public Health era has been marked by an emphasis on the aggregate harms emergent from online speech – including those so large that they ironically are only observable (if at all) at the population or societal level, like political extremism, changes in vaccination preferences, or decaying trust in democratic institutions. Public Health-era advocates have rallied around the notion that today’s technology companies have facilitated, and therefore effectively caused, serious societal problems in the process of enriching themselves, and that – with proper accountability measures in place – they would be more motivated to address content-related harms, or at least bear more of their cost.

But the Rights era never ended, even as the Public Health era seized center stage – and that’s what makes the balancing question with which Facebook’s white paper opens so difficult. Any Public Health-oriented effort to influence platforms’ content governance policies, for the sake of mitigating harms, risks incentivizing excessive removal of content and – over time, as new harms are identified – iteratively narrowing the domain of acceptable speech to an extent that meaningfully, and wrongly, chills discourse. And the involvement of government in setting standards for content moderation – as contemplated by the Facebook white paper – implicates thornier questions still, particularly given the imprint of Rights-era thinking on the law and policy landscape. In the United States, for example, many forms of government involvement would, and should, be compelled to abide by the limits of the First Amendment.

A New Era for Content Governance?

How, then, might we move toward accountability in the face of irreconcilable clashes between Rights-era and Public Health-era values, particularly given the serious practical and civil liberties concerns surrounding most forms of government involvement in content moderation? Platforms, regulators, and the public at large should embrace a third era – one of Process – in which content moderators might transcend the clash between rights and public health by developing transparent, public-interest-oriented, and public-involving dispute-resolution mechanisms capable of achieving broad legitimacy. The intention of such an approach would be to ensure that even those on the losing side of a content-related dispute could have confidence in the integrity of the decision process.

We might start by recognizing the unusual power of someone at a place like Facebook who bears the title of “Head of Global Policy Management.” If General Motors has someone in charge of public policy, the title refers to the firm’s lobbying shop. Here, however, public policy refers to the governance of speech and privacy for billions of people. In a Financial Times op-ed that accompanied the white paper’s release, Mark Zuckerberg himself reflected on the oddity: “I don’t think private companies should make so many decisions alone when they touch on fundamental democratic values.”

An effective process-based approach to content governance will require a significant and high-profile devolution of authority by the platforms. Given the platforms’ profit-oriented incentives, ongoing user-trust issues, and understandable discomfort with (and perhaps, from a corporate perspective, comparative disinterest in) being asked to resolve values-based questions, key decision-making processes will need to shift to outside entities. We’re already seeing experiments along these lines. Facebook has spent months of design and consultation building an Independent Oversight Board – an external body of specialists, supported by a trust designed to insulate the Board from Facebook’s undue influence, with the power to review and overturn content moderation decisions made by Facebook.

Facebook’s white paper suggests an approach to governmental regulation of content moderation that would stand to charter further experiments. It repeatedly invokes “procedural accountability” – the idea that government should hold platforms accountable for “having certain systems and procedures in place.” Rather than prescribing specific processes or standards by which speech should be judged, a procedural accountability approach to regulation could simply require that companies develop practices essential to a transition into the Process era. The white paper suggests, for example, that regulators might 

incentivize – or where appropriate, require – additional measures such as… A channel for users to appeal the company’s removal (or non-removal) decision on a specific piece of content to some higher authority within the company or some source of authority outside the company.

Importantly, the open-ended procedural focus of such a regulatory approach could also serve to mitigate civil liberties concerns relating to government meddling with speech, and might even stand a chance of surviving review under the First Amendment.

Of course, this same open-endedness would risk enabling inaction in the form of uninspired tweaks aimed at satisfying a minimal interpretation of the letter of the law. That’s particularly the case when the cost of failed experiments is high – as it often is when freedom of speech is implicated. But there are reasons to be cautiously hopeful, or at least intrigued. Facebook’s investment in the Independent Oversight Board demonstrates a recognition that it can’t and shouldn’t handle content governance alone, and some degree of willingness to assume high-profile risks in pursuit of a solution. And in the many parts of the world where governmental intervention can be more direct and heavy-handed – for lack of a First Amendment, among other things – finding satisfying process-based solutions might be, for the platforms, a means of heading off the imposition of cumbersome regulation.

Efforts to resolve the challenges implicated by content moderation through process-oriented solutions will have to weather continual controversy as the solutions are incrementally rolled out, and as ideas that look promising on paper meet any number of real-world barriers and skeptics. Facebook’s Independent Oversight Board will more likely than not be raked over the coals for the composition of its membership, the manner in which Facebook implements (or fails to implement) its precedents, and its (at least initially) quite limited scope. Indeed, such raking is an indispensable step in the development of ambitious new governance structures. To continue making progress towards a Process-era model of content governance, Facebook (or the Board) will need to respond by getting right back on the horse – tweaking, iterating, and refining in the face of yet more criticism, while simultaneously piloting additional process-oriented solutions to other content-related problems. (The vetting of political ads, which the company currently does not assess for truth at all, may be a good place to start.)

This cycle of experimentation and failure will be inevitable – for Facebook and for other social media companies seeking to push into the Process era – but a regulatory approach of the sort championed by Facebook’s white paper may help to accelerate their progress (without eliminating the possibility of firmer action). Process-oriented mandates might provide a structure in which the messy business of institutional innovation can be carried out and explained to users – an invitation on the part of regulators to explore more sustainable models for corporate power in the context of social media. For their part, platforms should aim to be maximally communicative and collaborative with their users and the broader public. This will mean candidly discussing successes and failures, releasing data about their moderation efforts to independent researchers, and (for the sake of fast iteration, and of minimizing adverse consequences) exploring ways of transparently running governance experiments in limited form.

There’s no guarantee that Process-era innovations will fix content governance, but we would surely be better off trying, failing, and iterating – starting sooner rather than later – than accepting a deeply unsatisfying status quo. With luck, the Facebook white paper will spur a renewed conversation around the role of government in enabling such innovation.

Image: Justin Sullivan/Getty Images