Countless Rohingya refugees have tried to record the ethnic cleansing of their communities by turning to Facebook, the social network that promised to give a voice to the voiceless. Rather than finding solidarity, they face censorship, with Facebook deleting their stories and blocking their accounts. The Rohingya are a majority-Muslim ethnic group primarily living in Buddhist-majority Myanmar. A recent wave of government-sponsored violence has driven an estimated 500,000 Rohingya from Myanmar since August.

Facebook’s mistake shows the danger of censorship and raises the question of which group might be targeted next.

Many of the organizations proposing censorship technologies have a singular focus, Muslim extremism, and ignore the threat posed by white supremacists. If we let the platforms we all depend on deploy tools that selectively censor one ideology, all users will suffer.

For years, a growing chorus of advocates has called for internet platforms like Google, Twitter, and Facebook to do more to restrict extremist content. Using technology to fight high-tech radicalization sounds like a good idea in theory, but automated filters are error-prone, and we must expect that censorship software will misidentify harmless content as violent or extremist.

We’ve seen this problem before. A decade ago, civil society groups objected that library and school content filters would not just block obscene content but would also censor valuable information and literature about health, sexuality, women’s rights, and LGBTQI issues. The risks of this “digital redlining” now extend to the platforms we use at home. Part of the problem is that censorship software is uniquely difficult to get right.

When only one community or viewpoint is targeted, the errors go unnoticed by the general public, and most users never learn how many harmless, even positive, posts are being blocked. With most users blind to the true cost of this censorship, the service provider is far less likely to respond to complaints, no matter how valid. The situation is made worse by the fact that the people designing these programs inject their own views, skewing decisions about what is “dangerous” and reinforcing the tendency of censors to silence already marginalized voices, like those of the Rohingya.

These technologies try to answer a seemingly simple question: What is dangerous content? This isn’t something a dispassionate algorithm can tell us; it’s an inherently subjective question with inherently subjective answers. Take, for example, the Counter Extremism Project (“CEP”), a leading censorship advocate that has spent months pressing the major tech firms to adopt its platform, publishing op-eds and speaking at conferences. Its eGLYPH platform would automatically flag and remove offensive content.

The danger of eGLYPH is that it appears to be directed almost exclusively at Muslim extremists, ignoring white supremacists. At a time when the president draws a moral equivalence between neo-Nazis and the social justice advocates who oppose them, as he did after Charlottesville, we need to fight all forces of hate. When vendors and activists claim to have the tools to target unwanted speech, we need to ask what biases they are bringing to the task.

Facebook’s secrecy only makes the problem worse. We don’t know whether Facebook is using eGLYPH, and if it is, we have no idea whether it provides oversight to make sure CEP isn’t driving a far-right agenda. As Facebook and other social media companies become indispensable, they raise the sort of First Amendment concerns that once applied only to the government. People like the Rohingya, who find themselves completely cut off from social media and search engines, are effectively muted.

There are solutions. Censorship systems must be simple, open, and even-handed. Facebook must use censorship algorithms that are easy for its users to understand; it shouldn’t take a Ph.D. to know why your post was taken down. The censorship standards must be shared transparently with users, so they know when Facebook goes too far. And censorship must be applied even-handedly to all users, regardless of race, religion, nationality, or sexual orientation. Any tool that Facebook employs to fight violent content or radicalization must be applied to all violent ideologies, not just a single group.

The cost won’t always be as obvious as it is when we silence the Rohingya, but it is clear that they won’t be the last group to be censored just when they most need the world to listen.

At the time of writing, Albert Cahn was Legal Director of the New York Chapter of the Council on American-Islamic Relations.

Image: Getty/Paula Bronstein