[Image caption: Kenyan youths use a mobile phone to record themselves next to a police van]

The Global Retreat from Content Moderation Is Endangering Free Expression: Kenya Shows Why

Across the world, major social media platforms are undergoing a profound and troubling shift: a structured retreat from proactive content moderation. Platforms are framing this move as a principled defense of “free speech,” but in practice, it is a deliberate choice to expose users to unprecedented levels of harm, making genuine freedom of expression more fragile.

This post-moderation philosophy creates the perfect conditions for State repression and digital authoritarian drift. Kenya offers a revealing and deeply concerning case study of this global trend: there, Internet Sans Frontieres, the organization I serve as Executive Director, conducted a seven-month investigation with the KenSafeSpace coalition, which was created to safeguard a democratic and safe digital space in Kenya.

A Global Context of Harmful Deregulation by Platforms

For years, online experiences differed sharply depending on geography. In wealthier regions such as North America and Europe, content moderation infrastructure, although imperfect, remained more robust than in under-resourced regions like Africa, Latin America, or parts of Asia. This structural inequality, documented notably by whistleblower Frances Haugen, shaped global content governance.

But today, platforms are abandoning proactive moderation altogether, replacing it with a narrow, reactive model that intervenes only when imminent harm can be demonstrated. Platforms often justify this shift as a correction to “censorship.” In reality, it strips away the minimal safeguards designed to prevent dangerous content from spiraling into real-world violence.

The consequences are unfolding now, not only in Kenya, but worldwide.

Why Kenya Matters

Kenya is entering a tense period ahead of the 2027 general elections. The country’s recent history, including post-election violence, shows how quickly inflammatory speech can translate into real-world harm. In this context, the disappearance of proactive content moderation is a direct risk amplifier.

From January to July 2025, Internet Sans Frontieres and the KenSafeSpace coalition observed content circulating on the most widely used platforms in the country: X (formerly known as Twitter), Facebook, and TikTok. We also collected user reports through a dedicated submission form.

Here is what we found:

  • 43 percent of analyzed content showed strong indicators of hate speech, particularly along ethnic and religious lines. In one example, commenting on a video of a religious figure denouncing “infidels,” a user explicitly called for violence against Muslim communities “worse than in 2007” (when Kenya experienced widespread violence after presidential elections). The post was still available online in October 2025 and had already received close to 400,000 views.
  • 26 percent of the content analyzed involved normalized and unmoderated gender-based violence. In one widely viewed publication (over one million views), a user asked X’s AI chatbot Grok to “nudify” a picture of a woman without her consent. While Grok did not comply with the request, other users responded with “nudified” pictures of the woman. The post was still available on X in October 2025.
  • Close to 30 percent of the posts posed serious risks of electoral disinformation. One recurring narrative falsely alleges that the incumbent government is mobilizing Somali-born citizens, largely Muslim, to manipulate the next election and secure another term for Kenyan President William Ruto.

Our findings reveal a system where harmful content is allowed to spread at scale, precisely because proactive intervention has been abandoned.

From Platform Neglect to State Overreach

Crucially, this erosion of safeguards is unfolding amidst conditions in which governments have intensified their own efforts to restrict freedom of expression.

In Kenya, authorities increasingly invoke the Computer Misuse and Cybercrimes Act to arrest bloggers, activists, and influencers on vague accusations of spreading false information. Yet a recent report by Amnesty International documents how the most potent and unmoderated disinformation often originates from State actors themselves.

Freedom House’s Freedom of the Net 2025 report places Kenya as only “partly free,” citing among other issues the government-ordered internet disruptions during the 2024 anti-government protests, which have been condemned by several civil society organizations, including Internet Sans Frontieres.

This pattern is visible elsewhere. As platforms withdraw from basic moderation duties, States feel licensed to step in with heavy-handed, often illiberal measures, including social media bans, arrests, surveillance, and criminal liability for intermediaries.

Brazil’s Block of X and the New Era of State-Platform Confrontation

The Brazilian government’s decision to block X nationwide for two months in 2024 illustrates the conflicts to come. The ban was imposed by Brazil’s Supreme Court after X “had refused to ban several profiles deemed by the government to be spreading misinformation about the 2022 Brazilian Presidential election.” Beyond the costs incurred by Elon Musk’s X and the disrupted access for millions of Brazilians, many digital rights organizations — fierce defenders of open internet principles — struggled to publicly condemn the block. Why? Because X’s own withdrawal from responsible moderation — and even defiance of court orders — had created such a toxic environment that defending the platform became nearly untenable.

Another warning sign of this new confrontation between Big Tech and sovereign nations was the August 2024 arrest of Telegram founder Pavel Durov in France. Internet Sans Frontieres publicly condemned the arrest as a dangerous precedent for intermediary liability (without defending the problematic circulation of extremely harmful content on the app). At the same time, we warned platforms that their disinvestment in safety was making such State actions more likely in the future.

There is still time to reverse course: tech companies can make the responsible decision to enforce proactive moderation and establish moderation safeguards sensitive to local context. Not all countries can afford the luxury of relying on the marketplace of ideas to curb the negative societal effects of harmful speech online. For their part, authorities in Kenya should refrain from using the fight against hate speech and disinformation as a pretext for invoking the law to silence dissent online. Civil society organizations should double down on efforts to research and explain the impact of harmful speech in Kenya before and during the 2027 general elections. And citizens in Kenya should remain vigilant and demand accountability from both authorities and social media companies.

By abandoning proactive content moderation, platforms are accelerating a global slide toward censorship — the very outcome they claim to oppose. In the short term, some companies may benefit from reduced operational costs or increased engagement metrics. But in the long term, history will not look kindly on those who turned away from the responsibility to protect freedom of expression when it mattered most.
