While the American public became more aware of Chinese cyber influence campaigns during the 2020 COVID-19 outbreak, those campaigns did not start there – and they will not end there, either. As the world’s attention returns to the origins of the global pandemic and recommits to its containment, the United States must prepare for inevitable shifts in the methods and goals of Chinese cyber influence activities – shifts likely beyond what Western countries have previously experienced in dealing with China. China’s attempts to shape the global narrative around the origins of the pandemic were, by most metrics, a failure. But if other recent Chinese cyber influence campaigns are any guide, this failure will likely trigger a reassessment and recalibration of overseas influence tactics in the coming months.

The United States – and the rest of the global community – should prepare for this shift by studying a context where it has already happened: Chinese cyber operations surrounding Taiwan’s 2020 elections. That case illuminates the extent of China’s capabilities and how its influence tactics change before and after major campaigns.

Learning from Chinese Influence Operations in the Taiwan 2020 Elections

In late summer 2018, the “Han Wave” swept through social media platforms in Kaohsiung, Taiwan, bringing with it a startling reminder of the volatility of Taiwan’s democratic media environment. Han Kuo-yu, an outsider Kuomintang (KMT) candidate and previously little-known businessman, beat the electoral odds in a city long considered a Democratic Progressive Party (DPP) stronghold through the power of a populist-leaning social media campaign. The Chinese government favors the KMT, which promotes a narrative of shared Chinese identity across the Strait, and sees Taiwan’s ruling DPP as a threat to the “One China” principle. Between late August and January, Han’s polling numbers jumped from 25% to a winning 53% as an unexpected outpouring of support spread from Kaohsiung across the island of Taiwan. The source of much of the movement’s viral content was a single Facebook group: “Han Kuo-yu Fans for Victory! Holding up a Blue Sky!” (韓國瑜粉絲後援團 必勝!撐起一片藍天).

Han’s story could be read as an isolated example of grassroots politics overcoming electoral odds and a testament to the digitization of Taiwanese society. That narrative, however, fails to recognize the role Chinese propaganda played in the election – in the form of fake news stories, memes, and loaded rhetoric pushed across social media platforms throughout late 2018.

In short, Han Kuo-yu’s election was not a product of organic grassroots activism but was fueled by the intervention of Chinese actors. “Han Kuo-yu Fans for Victory! Holding up a Blue Sky!” was created in April 2018 by three Chinese state-linked actors using fake profiles, who, over the next six months, shared thousands of posts with the group’s more than 88,000 members. The page became a key source of Han-related propaganda and fake news, which was further amplified through LINE, one of Taiwan’s most popular social media platforms. Han Kuo-yu went on to become the KMT’s presidential candidate in 2020; although he lost to incumbent Tsai Ing-wen, his early success in that campaign is largely attributable to Chinese cyber influence operations.

Through data-driven examinations of Beijing’s interventions in Taiwan’s elections, the United States can identify how China pivots its tactics after elections and uses elections to benchmark the success of its interference methods. Prior to the 2020 election cycle, Chinese cyber activities related to U.S. elections had been limited to coordinated pro-China propaganda, largely on social media (data hacks have targeted private citizens and corporations rather than political systems). While American law enforcement officials have claimed that China has attempted to interfere in American elections to some extent, research prior to the 2020 elections suggests that English-language interference has largely been issue-specific, uncoordinated, and easily identifiable. In contrast, some coordinated efforts have appeared in closed, Chinese-language social media groups: researchers identified a 2020 push on WeChat to intimidate Chinese-Americans in an attempt to keep them from voting. It seems possible that China used a variety of small-scale interventions in Taiwan to experiment with interference methods in what it perceives as a low-risk environment, then applied the lessons learned to refine its tactics in other electoral systems.

It is important to note that Taiwan’s relationship with China is unique, as are China’s goals there; China’s measures in Taiwan may therefore differ from its interference in U.S. systems. Interference in Taiwan may also be more explicit than it would be elsewhere, because it does not undermine China’s stance that “China doesn’t interfere in other nations’ domestic affairs and is resolutely opposed to hacking” (emphasis added), given that China frames Taiwanese politics as a domestic issue.

Nonetheless, Taiwan’s election illustrated the same phenomenon as the failed COVID-19 misinformation campaign: China’s influence operations are not achieving their desired results, at least electorally. China may have interpreted Tsai’s win last January and the international rejection of its attempts to blame the pandemic on the United States as proof that current “softer” influence campaigns (such as positive propaganda and “astroturfing,” or manufacturing a simulated grassroots movement) are insufficient, prompting more direct tactics moving forward. In Taiwan, China “sharpened” its influence efforts by leaning on overt military threats, increasing aerial incursions into Taiwan’s air defense identification zone to the highest rate since the Taiwan Strait Missile Crisis.

China’s failure to secure its desired electoral outcome in Taiwan seems to have had negligible short-term effects on cyber operations in the United States during the 2020 presidential campaigns, as China’s electoral interference remained largely unsophisticated and small-scale. The limited scope of that interference could indicate that more direct intrusion is beyond China’s capabilities, but it might also suggest that China has pivoted its focus beyond “soft” influence on social media, given the tactic’s failure in Taiwan. In the long term, China’s tactics could become “sharper”: military threats, denial-of-service attacks, and more hostile intervention measures. The Taiwan example underscores the likelihood of this shift: after Tsai Ing-wen’s re-election, China scaled up military threats and offensive rhetoric throughout the summer and autumn.

The first priority in combating Chinese cyber interference operations is identifying and quantifying instances of interference. Gathering accurate data on such operations is historically challenging, and Chinese influence operations are both relatively new and largely understudied. China habitually tampers with Taiwan’s political system, yet publicly available, broad-scale analyses of Chinese digital propaganda in Taiwanese media remain scarce. Available studies rely on anecdotal evidence and extrapolation, offering little systematic support for their claims about the scale or manner of information campaigns. This highlights the need to gather representative data on Chinese interference – particularly instances where China exercises a broad range of tools with a high degree of investment, since such contexts reflect the fullest extent of China’s influence toolkit.

A Disease Model Lens for Propaganda Operations

This article proposes a new strategy for viewing cyber information operations, primarily propaganda: through the lens of a disease model. This approach is a direct response to the development of a Chinese model of “information warfare” (IW) based on an epidemiological model of disease transmission. That model suggests that, to design effective IW, attackers should attempt to increase the “virulence” and “transmission rate” of the propaganda, where “transmission” can be understood as the rate at which individuals come under psychological pressure from IW tactics.
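
To make the analogy concrete, the dynamics can be sketched with the standard SIR (susceptible-infectious-recovered) equations from epidemiology – an illustrative stand-in supplied here, not necessarily the exact formulation used in the Chinese IW literature:

$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$

Here $S$ is the share of an audience not yet exposed to a story, $I$ the share actively spreading it, $R$ the share that has disengaged or been “inoculated” by debunking, $\beta$ the transmission rate, and $\gamma$ the recovery rate. A story “goes epidemic” when $R_0 = \beta/\gamma > 1$: an attacker raising “virulence” is effectively raising $\beta$, while defenses such as platform friction, warnings, and rapid fact-checking work by lowering $\beta$ or raising $\gamma$.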

Conceiving of propaganda as an epidemic allows tactics developed for disease control to be turned against IW, translating a long-standing, effective public health framework into the IW sphere. Beyond the specific recommendations presented here, which focus on “diagnosing cases” of IW and tracking “strains” of misinformation, this model can be extended to provide new insights into how existing public health systems might address propaganda or misinformation campaigns.

The U.S. government has resources available to support efforts to track and combat misinformation abroad, in cooperation with civic organizations and partner governments. In particular, the United States has developed increasingly sophisticated public health tools that could be reconfigured to address the growing misinformation epidemic. Committing public support to these efforts takes a diplomatic, constructive stance on an issue of international importance and signals the U.S. commitment to combating hostile influence operations.

The first obstacle to a data-driven exploration of Chinese cyber influence operations is gathering the data itself: misinformation is hard to identify, especially on messaging platforms like LINE, WeChat, or WhatsApp, where messages are private. Private discourse is not subject to the fact-checking standards and misinformation-detection algorithms that public platforms like Facebook have implemented and refined since 2016 (although those still have room to improve). The automated warning systems that reduce misinformation on platforms like Twitter and Facebook must therefore be replaced by other mechanisms on services like LINE.

To address the lack of data on Chinese misinformation, particularly in closed messaging groups, government organizations like the U.S. State Department’s Global Engagement Center or the American Institute in Taiwan could partner with civic organizations to sponsor the development of a crowd-sourcing tool for reporting “cases” of misinformation. This would harness civic engagement and technical expertise to build tools that are cost-effective for the government because they draw on the public’s expertise. Such a tool could collect instances of misinformation on a far broader scale than anecdotal newspaper reporting, building a more representative dataset while encouraging civic engagement on misinformation.
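
As a rough illustration of the intake such a tool would need – everything here, from the MisinfoReport fields to the table schema and fingerprinting rule, is a hypothetical sketch rather than an existing system – each submitted message could be normalized and hashed so that near-verbatim copies collapse into a single trackable “case”:

```python
# Minimal sketch of a crowd-sourced misinformation "case" intake.
# All names and the schema are illustrative assumptions, not a real tool.
import hashlib
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MisinfoReport:
    platform: str      # e.g., "LINE", "WeChat", "Facebook"
    content: str       # the forwarded text or transcribed claim
    reporter_id: str   # pseudonymous volunteer handle
    seen_at: str       # ISO timestamp when the reporter saw it

def content_fingerprint(text: str) -> str:
    """Normalize whitespace/case and hash, so near-verbatim copies
    of the same message collapse into one 'case'."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def ingest_report(db: sqlite3.Connection, report: MisinfoReport) -> str:
    """Store a report; return its case fingerprint for deduplication."""
    fp = content_fingerprint(report.content)
    db.execute(
        "INSERT INTO reports (fingerprint, platform, content, "
        "reporter_id, seen_at, ingested_at) VALUES (?, ?, ?, ?, ?, ?)",
        (fp, report.platform, report.content, report.reporter_id,
         report.seen_at, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()
    return fp

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE reports (
        fingerprint TEXT, platform TEXT, content TEXT,
        reporter_id TEXT, seen_at TEXT, ingested_at TEXT)""")
    fp = ingest_report(db, MisinfoReport(
        platform="LINE",
        content="Forwarded: officials admit the outbreak data was faked",
        reporter_id="volunteer-042",
        seen_at="2021-03-01T09:30:00+08:00",
    ))
    count = db.execute(
        "SELECT COUNT(*) FROM reports WHERE fingerprint = ?", (fp,)
    ).fetchone()[0]
    print(f"case {fp[:12]} now has {count} report(s)")
```

Hash-based fingerprinting only merges near-verbatim copies; grouping mutated variants of a story into one “strain” requires the fuzzier matching sketched in the case-tracker discussion below.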

A second challenge is identifying trends in the data in order to target sources of misinformation. Tracing a case of misinformation back to its origin can expose networks of information sharing, the platforms commonly leveraged to seed stories, and other cases that may have stemmed from the same origin story. For example, China is known to use “content farms” to seed stories into more mainstream media. Tracking a story from a social media account, to a mainstream media source, to a content farm makes it possible to identify sites that should be flagged as producers of misinformation, as well as media sources that frequently fuel the “chain of infection” by picking up fake news stories early and passing them on. Identifying these key links in the “chain” and publicizing their behavior may promote higher journalistic standards and make other outlets skeptical of information gleaned from those sources.
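
Structurally, this trace reduces to walking a directed graph of who picked a story up from whom. The republication edges below are invented for the example – in practice they would be inferred from timestamps, links, and text reuse:

```python
# Minimal sketch of tracing a story's "chain of infection".
# The edges are hypothetical; real ones would come from link analysis.
from collections import defaultdict

# republished_from[outlet] = the earlier source it picked the story up from
republished_from = {
    "facebook_fan_page": "mainstream_tabloid",
    "mainstream_tabloid": "content_farm_A",
    "line_group_rumor": "facebook_fan_page",
}

def trace_to_origin(outlet: str) -> list[str]:
    """Walk the republication chain back to the earliest known seed."""
    chain = [outlet]
    while chain[-1] in republished_from:
        chain.append(republished_from[chain[-1]])
    return chain

def infection_counts(edges: dict[str, str]) -> dict[str, int]:
    """Count how many downstream outlets each source directly 'infected' -
    outlets that repeatedly rank high here are the key links to publicize."""
    counts: defaultdict[str, int] = defaultdict(int)
    for source in edges.values():
        counts[source] += 1
    return dict(counts)

print(" -> ".join(trace_to_origin("line_group_rumor")))
# line_group_rumor -> facebook_fan_page -> mainstream_tabloid -> content_farm_A
print(infection_counts(republished_from))
```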

Once these initial problems are addressed, the next step is to create a public, open-source “case tracker” that logs recurrences of the same false story or message in the database, in an effort to map, trace, and eventually contain the spread of individual propaganda “strains.” The final product should map instances of prominent false stories across news outlets, social media platforms, and websites to find the original propagator of the misinformation. This could look like an amalgamation of Twitter’s information manipulation project (which exposes “inauthentic influence campaigns”) and WeChatScope, a tracker of censored WeChat posts – albeit manual rather than automatic, given the limitations of LINE – and it would mimic the success of those two projects by harnessing crowd-sourcing rather than scraping and mining algorithms. A critical aim of the case tracker would be to release open-source data on the “cases” and “strains” of propaganda and to publish user-generated models of the data – in essence, crowd-sourcing the analysis as well.
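
One way a tracker might group reports into “strains” is near-duplicate clustering, so that variants of one false story cluster together even as the wording mutates. The shingle size and similarity threshold below are illustrative assumptions, not tuned values:

```python
# Minimal sketch of grouping reported messages into propaganda "strains"
# via word-shingle Jaccard similarity. Parameters are illustrative.
def shingles(text: str, k: int = 3) -> set[str]:
    """Overlapping k-word shingles of a lowercased message."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def assign_strains(messages: list[str], threshold: float = 0.5) -> list[int]:
    """Greedy single-pass clustering: each message joins the first
    existing strain it resembles, otherwise it founds a new one."""
    strain_reps: list[set[str]] = []  # one representative shingle set per strain
    labels: list[int] = []
    for msg in messages:
        sh = shingles(msg)
        for i, rep in enumerate(strain_reps):
            if jaccard(sh, rep) >= threshold:
                labels.append(i)
                break
        else:
            strain_reps.append(sh)
            labels.append(len(strain_reps) - 1)
    return labels

reports = [
    "officials secretly admit the outbreak began in a US lab",
    "BREAKING: officials secretly admit the outbreak began in a US lab!!",
    "local farmers protest new water restrictions",
]
print(assign_strains(reports))  # -> [0, 0, 1]: two variants of one strain
```

A production tracker would need scalable approximations (e.g., MinHash) and multilingual normalization, but the core idea – one canonical “strain” per family of mutated messages – is what lets analysts map recurrences rather than isolated posts.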

Given the massive global challenges facing the Biden administration, cyber influence operation trends abroad may seem like a low priority. However, the Taiwan 2020 elections demonstrate that these operations do not stop when elections end – and that China may pivot its tactics to account for failures or successes. The United States should support efforts like the one proposed here and invest in countering disinformation before China’s most refined tactics are directed at the U.S. media environment.

Editor’s Note: An earlier version of this essay was among the winning entries in New America’s Reshaping U.S. Security Policy for the COVID Era essay competition. 
