Social media platforms are the new frontline for propaganda against — and the persecution of — human rights defenders. Guatemala is a case study that illustrates the dangers. It also provides lessons for smarter enforcement of social media content policies that could be useful across a range of fragile or post-conflict situations.
In Guatemala, Ricardo Mendez Ruiz, president of an organization called the Foundation Against Terrorism, has urged security forces to deal with opponents “without any mercy,” and has tweeted that he will have “the head” of the president of a United Nations commission investigating corruption and other serious crimes in the country as his “trophy.”
Mendez Ruiz is a prominent voice on social media, and in 2013 the Guatemalan Human Rights Ombudsman cited both him and his organization for hate speech against human rights defenders. A May 2019 report by the U.N.-backed International Commission Against Impunity in Guatemala (CICIG) found ample evidence that a Twitter account Mendez Ruiz has identified as his own has grown rapidly in influence through a network of interconnected bots and fake accounts that amplify its online presence.
In a stable political environment based on the rule of law, such unsavory invective would be merely reprehensible. In Guatemala, still grappling with the legacy of genocide and facing presidential elections on June 16, online attacks not only intimidate opponents but also exacerbate a climate of impunity in which high levels of political violence are tolerated and even condoned. In 2018, at least 26 human rights defenders were reportedly murdered in Guatemala. Most were affiliated with the same rural and indigenous communities targeted by the Guatemalan military and paramilitary forces during the decades-long armed conflict.
A recent report documents the way online platforms are being used by powerful elites in Guatemala to systematically undermine, isolate, and increase the vulnerability of activists, justice sector personnel, and journalists. The report was published by the American Bar Association’s Center for Human Rights and written by the Human Rights and International Law Clinic at the University of Connecticut under the supervision of the authors of this article.
Harmful Speech Under the Radar
Although it is not possible to identify a one-to-one correlation between any particular online activity and real-world violence, pervasive hate speech in Guatemala and the systemic targeting of defenders by state-aligned actors contribute to a climate in which violence, and impunity for that violence, are endemic. The risks of a downward spiral of political violence are particularly high during a constitutional crisis or national elections, and Guatemala is going through both this year.
This harmful speech flies under the radar of most content-moderation rules: it does not rise to the level of a direct threat of violence, and it is too ubiquitous to be addressed solely through takedowns. Although the tweets quoted above are relatively explicit, much of the harassing and inciting speech against defenders is coded or euphemistic.
The problem, however, is not weak rules, but insufficiently contextualized rules. Speech that is merely repugnant in one setting may be incendiary in another. Labeling a human rights defender a “terrorist” or “communist” typically does not violate content-moderation policies, but it can be extremely dangerous in countries, like Guatemala, marked by a weak rule of law and a history of political violence.
Instead of just more enforcement, social media needs smarter enforcement. Content-moderation policies must take into account the impact of speech in hazardous environments. Toward this goal, the report makes three primary recommendations.
Three Methods for Smarter Enforcement of Content Policies
First, social media companies should, as a temporary measure, include human rights defenders as a protected category under their harmful content policies in countries where defenders face persecution by the state, or are not protected by the government from violent retaliation.
Second, social media companies should establish additional review procedures to take better account of coded threats that contribute to a climate of violence, but which may not necessarily constitute a direct personal threat. In particular, social media companies should provide heightened scrutiny of content in crisis-ridden or “sensitive” countries; engage local personnel who can decipher coded speech; consider the status of the speaker in evaluating speech; and improve flagging processes to facilitate the gathering of context-specific information.
Social media companies should also provide heightened scrutiny of speech based on a review of the known contextual risk factors of violence, including a history of intergroup conflict, a major national political election in the next 12 months, and significant polarization of political parties along religious, ethnic, or racial lines.
Third, social media companies should rely on designated content moderators trusted to identify terms-of-service violations; create and implement online and social media literacy training programs; and create transparent appeals processes for challenging decisions to remove, or refusals to remove, flagged content. Companies should archive removed content, ensure such information is not permanently deleted, and allow access by freedom of information monitors, as well as investigators who are part of an international accountability framework. A transparent review process is the only way to effectively ensure that content-moderation policies are applied in a manner that is narrow and proportionate to the actual risk of harm.
Implemented carefully, these recommendations could help social media companies deliver on their promise of providing a platform for open debate without inadvertently enabling incitement against vulnerable populations or the silencing of those who advocate for the basic human rights of us all.