Last week, the Oversight Board, an independent body created by Meta to answer some of the most difficult questions around freedom of expression on its Facebook, Instagram, and Threads platforms, published a policy paper outlining key lessons for industry on moderating content with respect to elections. With the populations of at least 80 countries around the world set to participate in elections this year, there has never been a more critical time for democracy, human rights, and open and fair societies. In the first three months of 2024 alone, people in Bangladesh, Pakistan, Indonesia, and Taiwan went to the polls. Elections are already underway in India and are expected throughout the rest of the year across several other countries and regions, including South Africa, Mexico, the European Union, the United Kingdom, and the United States.

The Oversight Board, of which I am a part, officially made the protection of elections and civic space one of its seven strategic priorities in 2022, but has addressed election integrity issues since the Board’s first set of cases. Many conflicts around the world stem from electoral controversies. In this historic election year, it is especially important to identify ways in which social media companies can better safeguard the integrity of elections while respecting freedom of expression, in line with their responsibility to respect human rights under the United Nations Guiding Principles on Business and Human Rights.

The policy paper provides detailed guidelines on the Board’s approach in this historic election year, but I will discuss some salient points here.

The Importance of Freedom of Expression to Elections

Article 19 of the International Covenant on Civil and Political Rights (ICCPR) broadly protects freedom of expression. The U.N. Human Rights Committee has affirmed that “free communication of information and ideas about public and political issues between citizens, candidates and elected representatives is essential” (General Comment No. 34, para. 13). One of the main challenges the Board encounters in election-related cases is ensuring that political speech necessary for public debate is not suppressed, while preventing users from inciting or coordinating violence on the platform under the guise of free expression or protest.

Through several cases, the Board has emphasized the high protection that political speech receives under human rights law because of its importance to public discourse and debate. In the Altered Video of President Biden case, involving a video altered to make it appear as though the U.S. president was inappropriately touching his granddaughter’s chest, the Board emphasized that mere falsehood cannot be the sole basis for restricting freedom of expression under human rights law. The Board urged Meta to specify the harms its Manipulated Media policy seeks to prevent, and to rethink its default approach of removing manipulated content that does not violate any other content policy.

The Board has also protected political speech provided there is no direct connection to potential offline harm. For example, the Board instructed Meta not to remove news reporting of a politician’s speech in Pakistan’s Parliament, which contained a classical reference that, while violent in nature, was neither literal nor likely to lead to actual harm (Reporting on Pakistani Parliament Speech). On another occasion, a majority of the Board determined that a controversial expression of opinion on immigration was not hate speech because it did not contain a direct attack on a group based on a protected characteristic (Politician’s Comments on Demographic Changes). In both cases, the Board decided the content, while potentially offensive to some, constituted protected political speech and should stay up.

Violence and Intimidation by Political Leaders 

At the same time, the adverse human rights impacts and threats to democratic processes posed by political leaders glorifying, inciting, or threatening violence are real. The protection of the right to freedom of expression under Article 19 of the ICCPR is not absolute; the right may be limited under certain circumstances, such as protecting the right to life (Article 6, ICCPR) or the right to vote (Article 25, ICCPR). Article 20 of the ICCPR prohibits advocating for national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. The Board’s decisions consistently apply the six-factor test under the Rabat Plan of Action in considering whether content should be removed in line with Article 20.

Since its inception, the Board has addressed the issue of post-election violence and whether Meta was right to suspend former U.S. President Donald Trump from its platforms in the wake of the January 6, 2021, U.S. Capitol riots (Former President Trump’s Suspension). Since then, the Board has also looked at leaders inciting post-election violence in other settings, for example, in the Brazilian General’s Speech case. In both decisions, the Board found that Meta should have acted more quickly against the encouragement or legitimization of violence. The Board recommended that Meta establish a framework for responding to high-risk events as part of its broader election integrity efforts. Meta responded by creating a Crisis Policy Protocol, a policy guiding its response to crises when its regular processes are not sufficient to prevent harms. This tool can be applied to electoral controversies such as procedural disputes and contested outcomes.

In the Brazil decision, the Board also recommended that Meta establish a framework for evaluating and publicly reporting on its election integrity efforts worldwide, including adopting metrics for success and generating relevant data to improve its overall content moderation system. Information drawn from these metrics should help Meta decide how to deploy its resources during elections, draw on local knowledge to address coordinated campaigns aimed at disrupting democratic processes, set up feedback channels, and determine effective measures when political violence persists after an election’s formal conclusion. Meta has committed to doing so this year.

Meta’s platforms are used under a wide range of political regimes, and incitement to violence is not always confined to the immediate run-up to or aftermath of elections. In the Cambodian Prime Minister case, the Board required Meta to remove a violating post from then-Prime Minister Hun Sen targeting the political opposition with violence months ahead of scheduled elections. Given Hun Sen’s history of human rights violations and intimidation of political opponents, the Board also recommended the suspension of his Facebook page and Instagram account for six months. The Board concluded his threats against the political opposition could not be justified as “newsworthy” content and had a high likelihood of causing physical harm. Meta’s ultimate decision not to suspend Hun Sen’s account sets a potentially dangerous precedent for rulers elsewhere who frequently use Meta’s platforms to threaten and intimidate critics. A number of international human rights groups spoke out after Meta declined the Board’s recommendation.

Risks of Over-Enforcement 

The periods before, during, and after elections are crucial for heightened communication and information-sharing among users. One issue the Board commonly sees during elections is governments pressuring platforms to remove lawful content on the (sometimes pretextual) basis that it violates a platform’s policies. At the Board’s insistence, Meta now informs users when their content is removed due to a government request. Several decisions have also pushed for more transparency around government takedown requests (Öcalan Isolation, UK Drill Music, policy advisory opinion on Removal of COVID-19 Misinformation).

A policy that often leads to over-enforcement is Meta’s Dangerous Organizations and Individuals policy, which prohibits glorification, support, and representation of individuals, groups, and events that Meta designates as dangerous. While the policy pursues a legitimate aim, including during elections (Greek 2023 Elections Campaign cases), in practice it has all too often led to the arbitrary removal of content posted by users reporting on situations involving those groups, defending human rights, or drawing unobjectionable analogies. In a recent policy advisory opinion, the Board advised the company to end its presumption that the word “shaheed” (which loosely translates as “martyr” in one of its meanings) always denotes praise when referring to designated individuals (Referring to Designated Dangerous Individuals as “Shaheed”). This should lead to more accurate enforcement of what Meta has described as its “most moderated word,” while better respecting political expression. Importantly, the Board also asked Meta to clearly explain to users how its automated systems are used to generate predictions about potential violations of this policy.

Another recurring issue undermining civic space is the challenge Meta faces in distinguishing between figurative political criticism and credible threats prohibited by the Violence and Incitement policy, especially in non-English-speaking contexts. In the Iran Protest Slogan case, this challenge severely hampered a protest movement in a country where a particular slogan (“marg bar Khamenei,” translated as “death to Khamenei,” Iran’s Supreme Leader) was commonly used to resist the Ali Khamenei regime. The Board highlighted in that decision that rhetorical political statements that do not constitute a credible threat do not violate the policy and do not even require a newsworthiness exception. The Board recommended changes to the Violence and Incitement policy so that such speech during protests is not arbitrarily suppressed.

Disinformation 

Disinformation can undermine confidence in the integrity of elections and fuel polarization. Misleading content can sow distrust in government institutions, civil society, and the media. On the other hand, the question of what information is true, false, or misleading is often a legitimate part of democratic disagreement. Governments and powerful actors sometimes use the presence of misinformation as a pretext for suppressing uncomfortable truths. This is why combating harmful misinformation is so complex, a challenge further exacerbated by the use of artificial intelligence (AI) to influence politics.

In the Altered Video of President Biden case, the Board found that Meta’s Manipulated Media policy, which governs how AI-generated content is moderated, was riddled with gaps and inconsistencies. It treated content portraying people saying something they did not say differently from content showing people doing something they did not do, and it treated different types of audio and audiovisual media inconsistently. While the altered video was left up in that case, the Board urged Meta to revisit its policies on manipulated media to ensure content is removed only when necessary to prevent or mitigate specific harms, and to define those harms more clearly. The Board also recommended labeling AI-generated content as an alternative to removal, except when the content violates other policies. Meta has announced that it is acting to implement the Board’s advice, which will provide people with the context they need to make informed decisions about content.

The Oversight Board’s policy paper also highlights the role of Meta’s design and policy choices, in particular its newsfeed and recommendation algorithms, in enabling disinformation narratives promoted by networks of influencers to gain traction and spread, sometimes leading to offline violence. In various instances, the Board has urged Meta to explore measures to reduce the organic and algorithmic amplification of harmful content (Claimed COVID Cure case; policy advisory opinion on Removal of COVID-19 Misinformation). Such measures should include giving users the means to appeal when the company demotes their content based on a fact-checker’s rating of “false,” “misleading,” or “altered” (Altered Video of President Biden).

Although political ads are outside the Board’s remit, stakeholders have raised concerns with the Oversight Board that political ads violating the Community Standards were nonetheless allowed by Meta (Brazilian General’s Speech). The Board’s recommendation in the Brazil case to create a framework with success metrics for evaluating the effectiveness of the company’s election integrity efforts was partly a response to this phenomenon. The Board will closely monitor the implementation of this recommendation.

Conclusion: Nine Key Lessons for Industry

Based on the Board’s experiences, the policy paper identifies the following key lessons for those working to preserve electoral integrity on social media platforms. These guidelines are primarily for industry, but they aim to build on other stakeholders’ work to push companies to respect human rights and hold them accountable. 

  • Policies are one part of the story, but enforcement is equally essential. This demands that social media companies dedicate sufficient resources to moderating content before, during, and after elections. 
  • Companies must set basic global platform standards for elections everywhere. They must ensure they do not neglect the dozens of elections taking place in countries or markets considered less lucrative because this is where the human rights impact of not implementing such standards can be most severe. Platforms that fail to deliver should be held accountable. 
  • Political speech that incites violence cannot go unchecked. Quicker escalation of content to human review and tough sanctions on repeat abusers should be prioritized.  
  • Platforms must guard against the dangers of allowing governments to use disinformation, or vague or unspecified reasons, to suppress critical speech, particularly in election settings and around matters of public interest. 
  • Policies that suppress freedom of expression must specify the real-world harms they are trying to prevent, to ensure they are necessary and proportionate to the harm. 
  • Lies have always been part of election campaigns, but technological advances are making the spread of falsehoods easier, cheaper, and more difficult to detect. Clear standards need to be set for AI-generated content or “deepfakes” and other types of manipulated content, such as “cheap fakes.”  
  • Journalists, civil society groups, and political opposition must be better protected from online abuse as well as over-enforcement by social media companies, including at the behest of governments and other parties.  
  • Transparency is more important than ever when it comes to preserving election integrity. Companies must be open about the steps they take to prevent harm and the errors they make. 
  • Coordinated campaigns aimed at spreading disinformation or inciting violence to undermine democratic processes must be addressed as a priority.