Is censoring access to websites an effective way to compel content platforms to curb hate speech? If you'd asked me that question before July 29, 2022, my answer as an ardent defender of unfettered access to the internet would have been: obviously not!

But after the threat by Kenya's National Cohesion and Integration Commission (NCIC) to ban Facebook, there is room for doubt. After reading Global Witness' report on hate-filled ads that Facebook accepted, and based on its own assessment of the situation, the Commission decided to compel Meta, the platform's parent company, to do more or face a ban from the country during the Aug. 9, 2022 presidential election. The election is expected to be closely contested, in a country where ethnic violence has erupted in the heated aftermath of past election results. It is understandable that the NCIC, a government agency created in 2008 to prevent ethnic violence, takes the online spread of hate speech very seriously.

While inequities in content moderation investments have been denounced around the world, including by my organization Internet Sans Frontieres, there has always been a line not to cross: access to information and greater freedom of expression will always be a better solution than banning access to content platforms.

The outcry caused by the Nigerian government's decision to ban Twitter for over six months in 2021, for the same reasons (failure to sufficiently curb hate speech and fake news), was widespread, and rightly so. In a recent decision, the Economic Community of West African States (ECOWAS) Court agreed with the arguments presented by organizations and Twitter that banning the platform was a disproportionate violation of the right to freedom of expression. The same goes for Ethiopia, where the government's anti-fake news law, inspired by a similar text in Germany, was criticized for its over-reliance on disproportionate and unnecessary censorship. In 2020, a Wikipedia ban by the Turkish government was lifted after a ruling by the country's Constitutional Court.

In all of these instances, there was a consensus that free expression should prevail, and that government censorship, for any reason, was always a worrying step. Things should be no different this time simply because the protagonist is Meta (other content platforms have faced similar criticism). The NCIC's threat against the company should concern any defender of internet freedoms, and should be used as an opportunity to discuss rights-based and effective solutions to content moderation challenges. The Kenyan government fortunately seems to agree and has committed not to shut down the internet during the elections, a commitment that hopefully extends to Facebook, Instagram, WhatsApp, and all other content platforms.

The disappointment caused by Meta's numerous content moderation mishandlings is understandable. Yet defenders of a global and open internet should continue to warn against the false choice between safety and banning content platforms, as advocated by countries like Russia and China, two of the most restrictive regimes when it comes to internet governance.

There are other ways to deal with the bad, namely hate speech and the spread of mis- and disinformation, without sacrificing the good of the internet, including access to information and increased freedom of expression.

It is now urgent to demonstrate that content moderation is possible without renouncing fundamental rule-of-law and democratic principles. The European Union's Digital Services Act leads the way as courageous and robust legislation. The White House's Declaration for the Future of the Internet is an important reminder of the values that have enabled the internet to benefit societies around the world. The time is now to reconcile these propositions and have states, companies, and civil society actors agree on the principles and values of a democratic agenda for online content moderation.

The conclusion that internet bans are not the answer doesn't absolve big tech companies: in October 2020, I argued that their lack of responsibility in markets with apparently lower regulatory risk, but where the harmful consequences of content could be tenfold, would create a backlash, and that they were gambling with their vital access to markets of potentially billions of yet-to-be-connected people. More importantly, they would endanger the ability of billions of citizens to access tools that, imperfect as they are, remain the main gateway to freedoms and democratic thought in closed societies.

Social media platforms should refrain from using a narrow lens when prioritizing the content moderation needs of their borderless products and services. A wider perspective could benefit them by opening their eyes to early warning signals of future content threats for countries and markets where regulatory risks are allegedly higher. This requires real financial and human resources, and a commitment to work with civil society organizations and governments to find context-specific solutions.

Most importantly, we internet users and advocates of a free and open internet should protect the foundational idea that banning websites is never a solution to content moderation challenges.

IMAGE: Ian James Mwai (R), 23, browses social media platforms on his mobile phone with a member of his outfit of social media influencers at an office in Thika town, central Kenya on April 26, 2022. He was in the vanguard of the growing ranks of influencers feverishly punching keyboards and hoping to tilt the outcome of the country’s high-stakes elections, being conducted today, Aug. 9. The rising dominance of apps like Twitter and Facebook has opened a new front in Kenyan politics, with candidates desperate to draw the attention of the country’s 12 million social media users. (Photo by TONY KARUMBA/AFP via Getty Images)