The de-platforming of President Donald Trump and the associated purge of white supremacist social media accounts have spurred news coverage heralding the “unprecedented” nature of the bans that companies, including Twitter, YouTube, and Facebook, have put in place. Such claims stem from a United States-centric perspective. On a global scale, such de-platforming is commonplace. Still, the attention being paid to these recent American bans provides a useful opportunity to highlight the insights already gained in communities where State actors and others have (or, despite calls for the measure, have not) had their accounts suspended or deleted for incitement to violence.

In this article, I focus on two insights in particular: de-platforming as a window on the unequally distributed power and embedded assumptions that determine what content gets to stay online, and de-platforming as policy failure.

De-platforming, unequal power, and embedded assumptions

The specter of de-platforming offers a chance to scrutinize the power dynamics behind content moderation that are too often overlooked. As longtime digital rights advocate Dia Kayyali pointed out recently: “When platforms weigh priorities, are [five] dead people in Washington DC heavier than all the bodies in India or Myanmar or the many other places states use social media to incite violence?”

Of course, the charge of prioritizing the familiar over the foreign is one that can rightfully be leveled at traditional media too. Proximity to events has long driven what gets front-page attention. But unlike traditional media outlets that are oriented to a given domestic audience, the major U.S.-based social media companies (SMCs) market their platforms to users in Mumbai just as strongly as to those in Washington, D.C., and so it is reasonable to expect their priorities to reflect this reality. As Kayyali points out, this expectation is not currently being met.

In a forthcoming Harvard International Law Journal article, titled “Governing the Global Public Square,” I develop a set of case studies from the Global South to shed light on the role that economic and political power — coupled with cultural affinity, and distributed unevenly across and within States — plays in the decisions of major SMCs about what content stays on their platforms. The effort builds a counter-narrative to the mainstream U.S. legal scholarship, which is replete with case studies from the United States and Europe that implicitly assume online activity takes place against the backdrop of an offline world where the State is more likely to be a regulator of social media than a user (or abuser) of it, and where the rule of law is in operation.

For too long, the bulk of the regulatory conversation has proceeded by using mainstream Western communities as the empirical basis from which to analyze the ways in which content moderation takes place. This, in turn, has begun to bake in assumptions that work for a decent number of users in the context of a functioning liberal democracy, but have rendered invisible the experiences of many users across the Global South as well as in marginalized communities in the United States and Europe. Such assumptions also, it seems, blinded these SMCs to the possibility of their platforms being used to successfully incite violence at the U.S. Capitol.

It is not much of a spoiler alert to note that despite recent efforts at reform, major SMCs continue to default to the cultural assumptions and political and economic incentives held by their predominantly white American male founders. Part of this is simply a reflection of capitalism. When the advertiser revenue that Facebook derives from an American user averages $36.49 per quarter whereas that of a Burmese user comes in at $1.78, one can be dismayed but not surprised that Facebook responded to the deaths of five people in Washington, D.C., last week more quickly than to the deaths of thousands of Rohingya following incitement on Facebook by Myanmar State officials. In the latter scenario, it took years of local activism, coupled with international reporting and a United Nations Commission of Inquiry, to finally get Facebook to take the kind of de-platforming action it did in the United States this week.

The profit motive is not, however, the only driver behind the disparate responses to incitement that U.S.-based SMCs demonstrate across their markets. Unevenly distributed political power also plays a role. While all the major U.S.-based SMCs responded rapidly to a call by European regulators to expunge terrorist propaganda from their sites, Sri Lankan government officials were unable to secure a meeting with Facebook as incitement to sectarian violence went viral on its platform.

Another part of the story is about localized cultural competence – or, more accurately, the SMCs’ decision not to invest the resources to develop such competence. To draw again on the example of Myanmar, Facebook failed to translate its community standards into Burmese before launching its platform in the country. And it was only following allegations that genocide was being incited through its platform that Facebook hired a handful of Burmese-speaking moderators (and, of course, true cultural competence requires much more than mere language fluency). These decisions in turn reduced Facebook’s ability to respond to violence incited on its platform as it was happening.

Going forward, SMCs must be better stewards of their platforms in all of their markets: not just where users bring in the highest advertising revenue, not just where regulators are paying attention, and not just in places with cultural and linguistic affinity to the United States. They need to do this because it is the right thing to do, and because failing to do so implicates them in the loss of life.

De-platforming as policy failure

From an academic perspective, de-platforming decisions make for useful case studies. But as a real-world matter, they represent a policy failure with severe consequences. Consider two scenarios that lead an SMC to ban a user from its site on account of incitement: Under one scenario, as with the de-platforming of Trump, the ban flows from the fact that real-world harm has already been committed. That harm provides evidence of the user’s proven ability to incite violence and usually generates a public outcry; both factors push the SMC to take the drastic step of de-platforming.

When this harm happens close to the nerve center of the SMC, de-platforming can happen as quickly as we saw last week. For those in communities that are more peripheral to the SMC’s priorities, such action may only follow years of advocacy by affected groups. Either way, de-platforming is a response that comes too late from the perspective of those harmed. De-platforming in this scenario represents a failure to prevent violence. The lion’s share of that policy failure falls to the State actors and institutions responsible for the safety of the affected population, but it also reflects a policy failure by the SMCs, which increasingly have their own quasi-governmental departments to monitor risks of violence yet continue to give inciting speech a platform.

The other scenario, commonplace within marginalized communities, involves bans that flow not from evidence of real-world harm caused by the user’s speech, but from the SMC’s failure to comprehend the context in which the user is operating. It is in this scenario that Syrian human rights activists have found themselves booted from social media platforms for documenting the violence unfolding around them, which the SMCs’ human and/or algorithmic moderators have (mis)read as inciting violence. The policy failure here lies with the SMCs that did not invest enough resources to acquire the localized cultural competence needed to fairly enforce their own standards in the markets they entered.

De-platforming is a blunt instrument. Even when triggered by genuine incitement, a ban necessarily sweeps up any innocuous speech that a user may also have posted. Claims by Trump supporters that de-platforming has stripped the president of his ability to speak are overwrought; when SMCs ban State actors from their sites, those actors still have other media outlets through which to speak publicly. But for non-State actors, a ban can indeed silence their voice in an overbroad manner. And when such bans are based on an inaccurate understanding of the user’s speech – as has been the case with scores of human rights activists worldwide – the loss of voice compounds the harms often already underway in their communities. In theory, SMC efforts to strengthen the legitimacy of their content removal decisions through, for example, the work of the Facebook Oversight Board, should help. In practice, Jenny Domino warns, the Board risks doubling down on the Western-centric assumptions already implicit in de-platforming decisions.

How to do better

SMCs face a Herculean challenge in trying to keep their platforms safe for users, and, with respect to speech that incites violence, it is inevitable that even with optimal policies in place they will continue to make mistakes. But the status quo is far from optimal. So it is worth focusing on areas for improvement. The menu of options is large. At the broadest level, the push for antitrust regulation is growing. In the meantime, with respect to incitement specifically, companies need to expand the toolkit they use for responding to online hate.

Molly Land and I have discussed a prevention-by-design approach that requires SMCs to identify the speech conditions that are precursors to actual incitement, and then to use that information to guide the introduction of what engineers call friction onto their platforms. Examples include limits on forwarding or re-posting content and the deployment of fact-checking labels. The major U.S. SMCs have shown increased willingness and ability to build friction into their platforms to counter misinformation about COVID-19. Now they need to increase their ability to identify dangerous speech in the multitude of locales in which they operate and be willing to introduce the same types of friction to counter it.
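To make the notion of friction concrete, here is a minimal sketch, in Python, of how a forwarding cap and a labeling step might sit in front of a re-share action. Every name in it (ForwardRequest, FORWARD_CAP, the toy is_precursor_speech check) is a hypothetical illustration for this article, not any platform’s actual system; a real deployment would rest on locale-aware classification and human review rather than a keyword list.

```python
from dataclasses import dataclass

# Hypothetical cap on how many times a single post may be forwarded.
FORWARD_CAP = 5


@dataclass
class ForwardRequest:
    """A user's request to forward/re-share an existing post (illustrative only)."""
    post_id: str
    text: str
    prior_forward_count: int  # how many times this post has already been forwarded


def is_precursor_speech(text: str) -> bool:
    """Toy stand-in for a locale-aware classifier that flags speech conditions
    known to precede incitement. A real system would not use a keyword list."""
    toy_phrases = {"storm the", "wipe them out"}
    lowered = text.lower()
    return any(phrase in lowered for phrase in toy_phrases)


def apply_friction(req: ForwardRequest) -> str:
    """Decide how much friction to apply before completing a re-share."""
    # Hard cap on viral forwarding, regardless of content.
    if req.prior_forward_count >= FORWARD_CAP:
        return "block_forward"
    # Flagged precursor speech is slowed and labeled, not removed:
    # show an interstitial warning and attach a fact-checking label.
    if is_precursor_speech(req.text):
        return "show_warning_and_label"
    return "allow"


# Example usage:
print(apply_friction(ForwardRequest("p1", "Time to storm the capitol", 2)))
# -> "show_warning_and_label"
```

The design point of such friction is that it slows amplification without removing speech or banning the speaker, which is why it can, at least in principle, be deployed earlier, and across more locales, than de-platforming.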

Beyond prevention by design, and the many other concrete suggestions out there for reducing incitement, a more fundamental shift is in order. The attack on the U.S. Capitol provided a local example of the ultimate limits of building platforms around the model of a functioning liberal democracy, in which State actors would seek to prevent, not instigate, violence against their own people, and in which online incitement would be kept in check by an offline rule of law. Lessons from the Global South over the course of the past decade foretold the violence we saw in Washington, D.C. last week.

Image: People gather for refreshment at a teashop in Yangon, Myanmar, on August 31, 2018; many hang out to chat and browse Facebook on their mobile phones. Photo by SAI AUNG MAIN/AFP via Getty Images