Is it possible to eradicate terrorism and violent extremism from the internet? To prevent videos and livestreams of attacks from going viral and maybe even prevent them from being shared or uploaded in the first place? The governments and tech companies involved in the Christchurch Call are working with other public-private partnerships to develop the technical capacity and coordinated approach required to implement this ambitious agenda.
Government leaders and tech industry officials gathered for a virtual summit last weekend to mark the second anniversary of the Christchurch Call. New Zealand Prime Minister Jacinda Ardern launched the initiative, along with French President Emmanuel Macron, in 2019, after a far-right extremist attacked two mosques in Christchurch, New Zealand, killing 51 people. The killer had posted his manifesto online, promoted his planned attack on message boards, and livestreamed the assault on Facebook for 17 minutes.
Despite efforts by the major platforms to stamp out the video and prevent it from being re-uploaded and shared, it quickly propagated online, illustrating yet again that the internet never forgets. Subsequent terrorist attacks around the world were reportedly inspired by it.
The Christchurch Call now includes 55 governments and 10 of the world's leading online service providers, including Facebook, Google, Amazon, and Microsoft, which have voluntarily committed to the action plan. The United States, which declined to join the pledge during the Trump administration, citing free-speech concerns, reversed course and joined the Call this year, sending Secretary of State Antony Blinken to represent the Biden administration.
“Countering violent extremism — in particular racially or ethnically motivated violent extremism — is one of our highest counterterrorism priorities,” said Blinken. “We have to do everything we can to stop terrorists and violent extremists from recruiting and radicalizing people online.”
The action plan includes pledges from the participating governments and tech companies to eradicate terrorist and violent extremist content (TVEC) online.
“We need to understand how algorithms, at-risk internet users, and extreme networks interact on the path to radicalization, so we can find ways to intervene positively,” said Ardern in her remarks. “We need to update and improve our crisis response capabilities, so that the online impacts of a real-world event don’t exacerbate the harms to our communities.”
Taking Stock
The summit aimed to take stock of what governments and companies had achieved over two years and to agree on a common set of priorities going forward. These include expanding the geographic reach and diversity of participants, especially broadening the tech-company membership to platforms of a wider range of sizes and regions. Participants also will seek to improve crisis response and the rapid coordination needed to address the online dimension of attacks, while gaining a better understanding of the role algorithms play in amplification and radicalization. And they pledge to improve the transparency of government and industry efforts to counter TVEC online.
TVEC is an acronym worth knowing for anyone interested in content moderation and internet governance, because the idea behind it drives coordination between government and industry. Eliminating TVEC raises the question of how to respect human rights such as freedom of expression and association online, and to maintain a free and open internet, while preventing the posting and sharing of a category of content that is poorly defined and often context-dependent. It is very difficult for an upload filter or an algorithmic flagging system to distinguish between a video that mocks extremists or reports on terrorism and one that glorifies it.
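To see why, consider a minimal sketch of hash-based upload filtering (the function names and file contents below are hypothetical, for illustration only): an exact-match filter compares bytes, not meaning, so a news segment embedding known attack footage is blocked just as readily as a post glorifying it.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint. Production systems use perceptual
    # hashes (such as PDQ for images) that survive re-encoding and
    # cropping, but those are equally blind to context.
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist seeded with one known attack video.
ATTACK_CLIP = b"<bytes of known attack footage>"
blocklist = {fingerprint(ATTACK_CLIP)}

def should_block(upload: bytes) -> bool:
    # The filter sees only bytes, not intent: a news report or a
    # satire embedding the clip hashes identically to a post
    # glorifying it.
    return fingerprint(upload) in blocklist

# Both uploads are "the same content" to the filter.
assert should_block(ATTACK_CLIP)  # glorifying repost
assert should_block(b"<bytes of known attack footage>")  # news reuse
```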
The companies and governments have developed crisis-response protocols that provide a roadmap for tech companies to coordinate with governments in the wake of a terrorist attack. Developed through the Global Internet Forum to Counter Terrorism (GIFCT), an industry body that recently spun off to become an independent non-profit organization, they are seen as a top priority for implementing the Call’s voluntary pledges.
The plan of action coming out of the summit includes improving the Crisis Incident Protocol, which the GIFCT says has been applied to more than 140 incidents since 2019. Member companies share information and situational awareness to determine whether an attack has a particular online dimension. They also exchange the digital fingerprints, known as hashes, of multimedia and the URLs associated with TVEC, along with information about positive interventions and transparency practices.
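In practice, that exchange can be as simple as a shared database keyed by hash. A rough sketch, with hypothetical names and values, of how member companies might contribute and query fingerprints without ever sharing the underlying content:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SharedHashDB:
    # Hypothetical sketch of a GIFCT-style hash-sharing database:
    # members contribute fingerprints of confirmed TVEC, and any
    # member can check new uploads against the pooled set. Only
    # the fingerprint is shared, never the content itself.
    hashes: dict = field(default_factory=dict)  # hash -> label
    urls: set = field(default_factory=set)

    def contribute(self, member: str, content_hash: str, label: str) -> None:
        self.hashes[content_hash] = f"{label} (via {member})"

    def contribute_url(self, url: str) -> None:
        self.urls.add(url)

    def match(self, content_hash: str) -> Optional[str]:
        return self.hashes.get(content_hash)

# One platform flags a livestream; every other member now matches it.
db = SharedHashDB()
db.contribute("platform_a", "deadbeef01", "attack livestream")
print(db.match("deadbeef01"))  # -> "attack livestream (via platform_a)"
```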
Participating governments agreed to improve transparency about their flagging of terrorist and violent extremist content and about removal requests. Tech companies, for their part, committed to improve the quality of reporting while increasing the number and variety of companies that provide such reporting.
What It Means in Practice
The pledge calls for online service providers to “[i]mplement regular and transparent public reporting, in a way that is measurable and supported by clear methodology, on the quantity and nature of terrorist and violent extremist content being detected and removed.” Figuring out what this means in practice has become a focus for several multistakeholder initiatives.
For example, the Organization for Economic Cooperation and Development (OECD) is leading a process to develop a voluntary transparency reporting framework for TVEC online. And the GIFCT is exploring what meaningful transparency means in practice, with the aim of developing best practices and resources to facilitate greater transparency from tech companies and governments about these efforts. (Full disclosure: I am a member of the GIFCT working group on transparency, the OECD TVEC transparency working group, and the Christchurch Call Advisory Network, all of which are volunteer, unpaid positions.)
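One way to picture what "measurable and supported by clear methodology" could mean is a structured record of what was detected, how, and what happened on appeal. The schema below is a hypothetical sketch, not a format mandated by any of these bodies:

```python
from dataclasses import dataclass

@dataclass
class TvecTransparencyReport:
    # Hypothetical reporting schema: the pledge asks for the quantity
    # and nature of content detected and removed, backed by a clear
    # methodology.
    period: str                  # e.g. "2021-Q1"
    removals_total: int          # items actioned as TVEC
    detected_proactively: int    # found by automated systems
    flagged_by_users: int        # user and trusted-flagger reports
    flagged_by_governments: int  # government referrals
    appeals_filed: int
    restored_after_appeal: int   # a rough proxy for wrongful removals

    def proactive_rate(self) -> float:
        return self.detected_proactively / max(self.removals_total, 1)

    def restore_rate(self) -> float:
        return self.restored_after_appeal / max(self.appeals_filed, 1)
```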
While transparency is a focus of the various efforts underway to eliminate this unwanted content, there is also recognition that the path toward radicalization is about more than content, and it is here that some of the gaps in the process become apparent. The "user journey" to radicalization, and the role algorithmic recommendation and amplification play in surfacing and circulating extremist content, cannot be addressed by removing content alone.
Years of research have demonstrated how YouTube's recommendation algorithm promotes inflammatory content, favors extremism, and has helped radicalize the far right. An algorithm is a set of instructions built on assumptions about what will generate greater engagement, the outcome desired by revenue-generating commercial platforms. To this end, the working group on algorithms is trying to understand how platforms' algorithmic design contributes to radicalization or to the amplification of terrorist and extremist content. Greater visibility into algorithmic outcomes would contribute to a better understanding of these dynamics online, but how much companies will be willing to share about the secret sauce of their algorithms remains to be seen.
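The core loop is easy to caricature. In the hedged sketch below, the features and weights are invented for illustration, but the structure, scoring posts by predicted engagement and surfacing the top items, is the part that matters: a ranker trained on clicks and watch time does not distinguish "engaging" from "radicalizing."

```python
# Hypothetical engagement-ranking sketch; features and weights are
# invented. A model trained on clicks and watch time learns that
# outrage correlates with engagement, and the ranker simply follows.
def predicted_engagement(post: dict) -> float:
    return (0.5 * post["predicted_watch_time"]
            + 0.3 * post["predicted_shares"]
            + 0.2 * post["predicted_comments"])

def rank_feed(candidates: list, k: int = 10) -> list:
    # Surface the k posts the model expects users to engage with most.
    return sorted(candidates, key=predicted_engagement, reverse=True)[:k]
```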
Furthermore, it is the algorithm's interaction with personal and relational data that makes it powerful, enabling the targeting of content at the individuals most likely to engage with it. A focus on algorithms alone obscures the role that data collection and microtargeting play in radicalization and amplification. And while data-privacy concerns around the collection and analysis of data related to TVEC removal have been part of the discussion, concerns about personal data collection and the business models on which many of these platforms are built are not on the table.
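Add personal data and the same machinery becomes a targeting engine. Another sketch, again with invented names and a made-up threshold, of selecting the users most likely to engage with a given item:

```python
# Hypothetical microtargeting sketch: per-user interaction history
# lets the platform pick out the audience most likely to engage with
# a specific item, which is what personal data adds to the algorithm.
def affinity(user_history: set, post_topics: set) -> float:
    # Jaccard overlap between topics a user has engaged with before
    # and the topics the post is about.
    if not user_history or not post_topics:
        return 0.0
    return len(user_history & post_topics) / len(user_history | post_topics)

def target_audience(users: dict, post_topics: set,
                    threshold: float = 0.3) -> list:
    # users: user_id -> set of topics previously engaged with
    return [uid for uid, history in users.items()
            if affinity(history, post_topics) >= threshold]
```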
Government Suppression
Perhaps predictably, some of the governments that have signed the pledge, and whose representatives spoke at the summit about addressing TVEC online, have also sought to shut down domestic dissent and crush independent and critical reporting. They have limited freedom of expression online by censoring and even shutting down the internet, and sometimes by unjustifiably branding their opponents as extremists or terrorists.
The Indian government, a founding supporter of the Call, for example, has shut down the internet in Kashmir, and during domestic farmer protests it throttled connections and censored social media content. It regularly equates journalism and dissent with terrorism.
Civil society representatives in the Christchurch Call Advisory Network, of which I am a member, have expressed concern that the same coordination and technological efforts undertaken to combat TVEC could be misused to restrict legitimate expression by governments with little tolerance for dissent. We have called for robust engagement and consultation with civil society.
“It is vital that practical steps are put in place for more active and robust engagement and consultation,” the advisory network said in a statement marking this year’s anniversary. “We want to see governments and tech companies collaborate more effectively with the Network to address critical issues that relate to the Call and are often reflected in national and/or regional legislation.”
Members of the network who spoke at the summit warned that the challenges posed by terrorism and violent extremism, across the ideological spectrum, cannot be addressed as online problems alone. Effective solutions will require both online and offline responses, robust consultation with civil society, and an unwavering commitment to human rights and a free, open, and secure internet.
The big question is whether the twin imperatives of eradicating TVEC and protecting the internet's openness and freedom of expression are compatible. One issue that has not been dealt with head-on is what error rate, meaning how much protected content is wrongly removed in the quest to prevent the spread of unwanted content, is acceptable to the different players affected. The answer will determine how to balance these seemingly incompatible goals.
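The stakes of that question are easy to see with back-of-the-envelope arithmetic; the volumes and rates below are illustrative, not measured:

```python
# Illustrative numbers only: at platform scale, even a tiny
# false-positive rate removes an enormous amount of protected speech.
daily_uploads = 500_000_000    # hypothetical platform volume
false_positive_rate = 0.001    # filter wrongly flags 0.1% of benign posts

wrongly_removed = daily_uploads * false_positive_rate
print(f"{wrongly_removed:,.0f} legitimate posts removed per day")
# -> 500,000 legitimate posts removed per day
```

Scale, in other words, turns even small error rates into a large free-expression problem, and deciding how large is too large remains the unanswered question at the heart of the Call.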