Four of the Facebook Oversight Board’s 13 decisions so far have taken aim at the platform’s Dangerous Individuals and Organizations Community Standard (DIO Standard). The policy has long been criticized by civil society for being opaque and overbroad, and for targeting political speech by Muslim users – such as posts by activists in Kashmir and Palestine, and commentary on the U.S. drone strike on Iranian official Qassem Soleimani. The company has responded with a series of clarifications and policy revisions, but the rules require a fundamental rethink and far more transparency about how they are enforced, including the role of governmental pressure.

Targeting Terrorist Content

In 2015, as U.S. and European governments became concerned about ISIS’s facility with social media and its ability to attract Muslims from the United States and Europe to its cause, the Obama administration and its European counterparts began pressing social media platforms to take action. While initially arguing that there was no “magic algorithm” for identifying “terrorist content,” the major companies quickly came around, and by 2016 Facebook and Twitter were highlighting how many ISIS accounts and posts they had removed. That year, Facebook, Microsoft, and Twitter also announced that they would create a shared database of the digital fingerprints of terrorist videos and images, which evolved into the Global Internet Forum to Counter Terrorism.

Often these actions take place under Facebook’s DIO Standard, which prohibits “representation” of and “praise” and “support” for “dangerous individuals and organizations.” The board’s criticisms of the standard have focused on the lack of clarity about the meaning of these terms, and it has called on the company to identify the groups and individuals it considers “dangerous.”

The board grappled with the DIO Standard in one of its first decisions, an appeal of the company’s removal of a post that quoted Joseph Goebbels. While the user did not provide any commentary on the quote, comments on the post suggested it “sought to compare the presidency of Donald Trump to the Nazi regime.” The board sharply criticized the DIO Standard for failing to define key terms, particularly what constituted “praise” and “support.” Among other things, Facebook had not made clear to users that any post quoting a dangerous individual must actively disavow the quote, nor had it made public the individuals or organizations it had deemed “dangerous.”

The board raised similar concerns when it considered Facebook’s deletion of a video that praised Indian farmers protesting against India’s ruling Bharatiya Janata Party (BJP) and the Hindu nationalist organization Rashtriya Swayamsevak Sangh (RSS). It reiterated that key terms in the DIO Standard, such as “praise” and “support,” remained undefined, and that the standard was not available to users in Punjabi. The board also raised questions about whether Indian officials had leaned on Facebook to remove “content around the farmer’s protests, content critical of the government over its treatment of farmers, or content concerning the protests.” But it was stymied in addressing the issue because Facebook refused to provide information about its communications with Indian officials.

Only in its decision on the suspension of Donald Trump’s account did the board uphold a Facebook decision under the DIO Standard. The board determined that the suspension of Trump’s account in the wake of the Jan. 6 attack on the Capitol by his supporters was consistent with the DIO Standard because Facebook had designated the attack a “violent event” and Trump’s comments – such as “We love you. You’re very special.” – amounted to praise of the event and its perpetrators. Ongoing violence, the risk of further violence, the size of Trump’s audience, and his influence as head of state justified the suspension. As we detailed in a previous Just Security piece, the board declined to comment on the DIO Standard’s lack of criteria for the violent events that fall within its scope.

Facebook’s Response

On June 23, Facebook updated its DIO Standard. The new standard divides dangerous organizations into three tiers, keyed primarily to the degree of harm the company attributes to each; violence is the touchstone, and the greatest restrictions fall on groups that engage in actual offline violence.

Tier 1 covers entities engaged in “terrorism, organized hate, large-scale criminal activity,” as well as “mass and multiple murderers” and “violating violent events.” These are groups that cause “serious offline harms” by “organizing or advocating for violence against civilians, repeatedly dehumanizing or advocating for harm against people based on protected characteristics, or engaging in systematic criminal operations.”

Tier 2, “Violent Non-State Actors,” consists of “[e]ntities that engage in violence against state or military actors but do not generally target civilians.” Tier 3 consists of groups that routinely violate Facebook’s Hate Speech or DIO Standards on or off the platform, “but have not necessarily engaged in violence to date or advocated for violence against others based on their protected characteristics.” Examples include “Militarized Social Movements, Violence-Inducing Conspiracy Networks, and Hate Banned Entities,” which the DIO Standard now defines.

Facebook treats speech from and about Tier 1 groups most severely, removing praise, support, and representation of the groups. Prohibited praise would cover “speaking positively” about them, “legitimizing” their cause, or “aligning oneself ideologically” with them. This appears to continue the company’s existing policy, under which, for example, a post arguing that al-Qaeda’s objective of removing foreign troops from Saudi Arabia was justified would be forbidden. Facebook also carves out an exception for posts that “report on, condemn, or neutrally discuss” Tier 1 groups and their activities. As is already the case, news reports about al-Qaeda’s goals would be covered, but it is less clear how the neutrality of an individual user’s comments would be evaluated. Until now, as the Goebbels case indicated, Facebook required a user to disavow the group to protect content from removal. It is not clear whether this is still required under the neutral-discussion exception.

For Tier 2 groups, Facebook removes support for the groups and praise of any violent acts, but not praise of their non-violent actions. For example, social programs or human rights issues supported by a violent non-state actor could be praised, while its violent clashes with government officials or advocacy of violent overthrow could not. For Tier 3, Facebook removes representation only, permitting praise and support. Thus, it seems that QAnon cannot have a Facebook page or event, but users can praise it and call on their friends to support the movement with no fear of sanction.

Stuck in the Middle?

Despite these extensive changes to the DIO Standard, the board signaled in its most recent decision, published July 10 – in unusually strong language – that the revisions made to date do not sufficiently address its recurring concerns.

The board determined that Facebook wrongly removed a post encouraging discussion of the solitary confinement of a leader of the Kurdistan Workers’ Party (PKK), a group that has used violence in support of its goal of Kurdish secession from Turkey. The board criticized Facebook for purportedly misplacing internal guidance that was supposed to allow content discussing the confinement conditions of individuals on the DIO list. It reiterated its criticism that, without publication of this and any similar exceptions, the terms “praise” and “support” remain difficult for users to understand. In its policy recommendations, the board showed a new willingness to dictate policy substance to Facebook, articulating specific categories of speech, defined in detail, that should be protected from removal under the DIO Standard: discussion of rights protected by United Nations human rights conventions, discussion of allegations of human rights violations, and calls for accountability for human rights violations and abuses.

The board also expressed continuing concern that governments may be able to use the DIO Standard to suppress legitimate user content that criticizes government actions. It noted that while Facebook reports the number of requests from government officials to take down content that violates local law, it does not report on requests by government officials to remove content for purported violations of Facebook’s Community Standards. The board recommended that this information be provided to users whose content is removed at a government’s request, as well as made public in aggregate numbers in the company’s transparency reports.

This aspect of the board’s recommendation highlighted a significant gap in Facebook’s transparency reports. Although government requests to take down content are evaluated first under Facebook’s own standards and second – if there is no policy violation – under local law, Facebook reports only on removals that occur at the second stage. This hair-splitting exercise potentially hides from public view a tremendous volume of government requests to remove content.

What’s Next?

It is obvious that Facebook needs to do more to respond to this series of cases.

First, terms like “praise” and “support” are overbroad and likely suppress significant political speech. A narrower rule against praising violence (like Twitter’s) would be simpler to administer, easier for users to understand, and more protective of political speech, in keeping with the company’s stated commitment to “voice.” By focusing on the content of posts, considered in appropriate context, rather than the groups or individuals they reference, Facebook may also be able to avoid reliance on (and disclosure of) a list of banned groups and individuals.

Second, the company needs to do better at ensuring that its rules and processes are understood by users and consistently applied. In the Trump case, Facebook’s initial contention that the former president had not – prior to Jan. 6, 2021 – violated any of its community standards was widely seen as disingenuous. Its subsequent discovery that Trump had in fact violated its rule against harassment by fat-shaming an attendee at one of his rallies hardly helped matters. In the most recent case, Facebook claimed that its internal guidance protecting discussion of human rights violations suffered by the groups and individuals covered by its DIO Standard had been mysteriously lost. These incidents hardly inspire confidence that the company knows what it is doing when it removes posts.

Finally, Facebook must come clean about removals that are initiated by governments but carried out under its Community Standards. By refusing to provide information about such interactions with governments, the company has thus far effectively hidden the scope of government influence over its content moderation. This significant loophole, coupled with Facebook’s repeated refusal to answer the board’s questions about government requests, enables governments to exploit Facebook’s Community Standards to quash public dissent.

As the board and civil society groups (like the Brennan Center, where we both work) continue to push Facebook for more information and better rules and enforcement, it is worth noting that the company provides more information about removals than its peer platforms. They all need to do better.

IMAGE: A list of “public complaints” against Facebook policies, including the social media giant’s political stances, data security lapses, politicization, privacy violations and misinformation, is taped to the outside of their office building during a protest led by the organization Public Citizen in Washington, DC, May 25, 2021. (Photo by SAUL LOEB/AFP via Getty Images)