In its public-facing quarterly financial reports, Meta, the parent company of Facebook, Instagram, and WhatsApp, labels all countries in Africa, Latin America, and the Middle East as the “Rest of World.” Although one-third of Facebook’s daily active users, 638 million people, live in the “Rest of World,” they receive nowhere near their share of Facebook’s budget for combating misinformation. Indeed, Facebook allocates 87 percent of its misinformation budget to the United States and Canada alone. For a long time, Meta has escaped being called out on its neocolonial hubris. But no longer – the “Rest of World” is fighting back. And, in the courts, that fight is starting with a constitutional petition in Kenya.

Litigating in Kenya

Last month, Abrham Meareg and Fisseha Tekle filed a claim against Meta in the Kenyan High Court. Nairobi is the hub for Facebook content moderation for Eastern and Southern Africa. For many people, Facebook’s content moderation decisions are a matter of life and death. That was the case for Abrham’s father, Professor Meareg Amare, a respected chemist at a university in Ethiopia. He wasn’t on Facebook. Yet Facebook’s policies and practices enabled his killing.

On Oct. 9 and 10, 2021, two posts, including photos of Professor Meareg, were published on Facebook. These posts made false accusations against him, identified the small neighborhood in which his family home was located, named his place of work, and called for his death. His son, who does have a Facebook account, tried desperately to obtain Facebook’s help.

The posts were published in the midst of an ongoing war between the Ethiopian government and the Tigray People’s Liberation Front (TPLF), amid increased targeting of Tigrayan people. Identifying Professor Meareg as a Tigrayan, and falsely associating him with the TPLF, put his life at imminent risk. Abrham reported the posts to Facebook again and again, but Facebook took no action. Just weeks later, Professor Meareg was murdered outside his family home.

This horrific scenario is not unique. Countless people have lost their lives, or had them put at risk, because of Facebook’s newsfeed algorithm. The algorithm, which amplifies posts likely to receive engagement from users, has continually boosted hate, extremism, and violence. One person who has seen this in detail is Fisseha Tekle, who was an Ethiopia researcher for Amnesty International and is now one of its legal advisors. An experienced, meticulous human rights researcher, he has written multiple reports about human rights violations in the Ethiopian war. Because of these reports, he has been targeted with hatred and vitriol on Facebook, putting his life at risk and making it very difficult for him to pursue his work.

Article 27 of the Kenyan Constitution provides for equality before the law, prohibiting discrimination based on protected characteristics including race, ethnicity, social origin, language, and birth. The Constitution expressly applies not only to government but to all “persons” (Article 20(5)), and “persons” are defined to include corporations (Article 260). Abrham and Fisseha assert that Meta discriminates against Facebook users in Africa on account of “race, and ethnic and social origin.” Their constitutional petition argues that Meta has repeatedly prioritized profit over the lives of Africans and asks the Court for redress. The petition demands that Facebook adjust its algorithms to stop prioritizing hate and incitement, that the company employ and compensate sufficient content moderators to keep the platform safe, that it provide a restitution fund for all victims of violence incited on Facebook, and that it issue an apology to the Meareg family.

If a similar suit had been lodged in a U.S. court, it would most likely fail under Section 230 of the Communications Decency Act of 1996 (CDA), which U.S. courts have interpreted to provide social media platforms like Facebook with wide-ranging immunity from civil suit. Petitioners in an upcoming U.S. Supreme Court case hope to change this interpretation, but either way, no such immunities will protect Meta before the High Court of Kenya, which will make its determination based on Kenyan constitutional law.

Facebook’s Discriminatory Policies & Practices Extend Globally

Abrham and Fisseha’s constitutional petition highlights how language disparity is entrenched on the Facebook platform. For Ethiopia, with a population of 117 million, Facebook has hired just 25 content moderators, who are able to cover only 3 of the 85 languages spoken in the country. Inciting, hateful, and dangerous content, like the posts against Professor Meareg, passes undetected, remains online, and is boosted by Facebook’s algorithms. The platform itself, while available worldwide, is in fact only fully accessible in English. For example, the Help Center and Community Standards Enforcement Center are often not translated into local languages.

None of this, of course, is news to Meta. Tragically, the killing of Abrham’s father is far from the first death enabled by Facebook’s policies and practices. The company’s deadly configuration is painfully familiar to anyone who has worked with survivors of platform-enabled crimes in Myanmar, where the military used Facebook to incite genocide against the Rohingya people. As in Ethiopia, Facebook’s automated content removal systems could not read the local typeface, Facebook failed to properly translate its Community Standards into the local languages, and the company did not employ a single content moderator who spoke Burmese. Rohingya survivors of the resulting crimes are trying to sue Meta in U.S. courts. Their case was recently dismissed on the basis of Section 230 of the CDA, although the court left the door open for them to re-file their complaint.

Each time such a case is brought to public attention, Meta’s response is the same: we are doing all we can. In relation to Ethiopia, the company says: “for more than two years, we’ve invested in safety and security measures… including building our capacity to catch hateful and inflammatory content in the languages that are spoken most widely in the country.” As detailed above, these “investments” are woefully inadequate. Whistleblower Frances Haugen, in her testimony to U.S. senators, explained how Facebook’s algorithms “literally fan[] ethnic violence” in Ethiopia: “Facebook… knows, they have admitted in public, that engagement-based ranking is dangerous without integrity and security systems, but then not rolled out those integrity and security systems to most of the languages in the world… And that’s what is causing things like ethnic violence in Ethiopia.”

Facebook has been willing to use the security systems Haugen referenced to protect its U.S. users. We know, for example, that on Jan. 6, 2021, in response to the attack on the U.S. Capitol, Facebook implemented its “break the glass” procedure. This procedure involves a series of specific algorithmic changes, deployed in times of crisis, so that inciting, hate-filled, and dangerous content is removed, muted, and prevented from further distribution. Facebook documents confirm these steps are effective; they substantially reduce the reach of violent and hate-filled content. Yet, for Ethiopia, Facebook has chosen not to implement this procedure, despite the ongoing violence fueled by its platform.

U.S. Courts Lagging Behind

If U.S. courts continue to close their doors to claims against U.S. social media companies, then plaintiffs like the Rohingya genocide survivors may well follow Abrham and Fisseha’s lead and look outside the United States for redress. The emerging picture is one in which U.S. courts lag behind their foreign counterparts in providing a forum for litigating harms enabled by Meta and other social media companies. This outcome, however, can be avoided.

On Feb. 23, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google. Petitioner Reynaldo Gonzalez sued Google, which owns YouTube, over the killing of his daughter by ISIS in Paris. Gonzalez acknowledges that Section 230 of the CDA protects Google from suit for YouTube videos created by ISIS. He nonetheless argues that Section 230 does not shield Google from liability for the way in which YouTube’s recommendation algorithms pushed ISIS content out to users who were likely susceptible to ISIS propaganda. Gonzalez seeks an outcome that continues to uphold Section 230 immunity for content posted to U.S. social media platforms by third-party users but opens up liability for the type of user-engagement algorithms that have fueled violence in Ethiopia or Myanmar and recruited ISIS members worldwide.

Creating a system in which social media platforms take responsibility for the effects of their algorithmic recommendations seems a worthy goal, but caution is warranted. Algorithms can amplify dangerous content, but they can also be used to demote it. And it would set up a perverse incentive structure if social media platforms could be sued for using their algorithms to minimize the spread of dangerous content. Rather than drawing the liability lines by offering social media platforms blanket immunity for third-party content but none for their own algorithms, the U.S. Supreme Court could follow the path laid out in an amicus brief by the Cyber Civil Rights Initiative (CCRI). There, Professors Mary Anne Franks and Danielle Citron urge the Court to adopt an interpretation of Section 230 that is consistent with both its plain text and legislative intent.

The first part of Section 230(c) states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” CCRI highlights the distinction between a “publisher” and a “distributor” of defamatory material, noting that “[d]efamation at common law distinguished between publisher and distributor liability.” While a publisher was strictly liable for carrying defamatory matter, a distributor who only “delivers or transmits defamatory matter published by a third person is subject to liability if, but only if, he knows or has reason to know of its defamatory character.” In other words, if an individual or a business knows that it is facilitating harm caused by a separate, directly liable party, that facilitation may give rise to secondary civil or criminal liability. Why should this basic principle of secondary liability not apply to social media platforms?

The second part of Section 230(c) is a Good Samaritan provision, written to immunize services from liability for “any action voluntarily taken in good faith” to remove “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content from their sites. This means that if a social media platform uses an algorithm, voluntarily and in good faith, to limit the spread of dangerous content, then it deserves protection. As the CCRI brief states, however, “Section 230(c)(2) includes important limits to the immunity it provides.” Crucially, it does not immunize social media companies “that do nothing to address harm or that contribute to or profit from harm.”

This interpretation is consistent with what survivors of platform-enabled crimes want. Abrham and Fisseha’s constitutional petition asks the Kenyan High Court to order Facebook to change its algorithm to demote inciting, hateful, and dangerous content. Had Facebook done this when inciting content against the Rohingya spiked on its platform, the company could have reduced the speed and scale of the killings, or even helped avert the genocide.

“Rest of World” users have not let the existing (and, in our view, erroneous) interpretation of Section 230 deter them in their fight for redress. And where U.S. courts have failed them, the Kenyan High Court may soon begin to fill the void. Depending on what the U.S. Supreme Court decides in Gonzalez v. Google, it could be U.S. users who are left unprotected by their legal system when Facebook’s policies and practices enable harm against them.

IMAGE: A man browses social media platforms on his mobile phone at an office in Thika town, central Kenya. (Photo by Tony Karumba/AFP via Getty Images)