A recent Washington Post article about artificial intelligence (AI) briefly caught the public’s attention. A former engineer working for Google’s Responsible AI organization went public with his belief that the company’s chatbot was sentient. It should be stated bluntly: this AI is not a conscious entity. It is a large language model, trained indiscriminately on Internet text, that uses statistical patterns to predict the most probable sequence of words. While the tone of the Washington Post piece conjured the usual Hollywood tropes about humanity’s fear of sentient technology (e.g., storylines from Ex Machina, Terminator, or 2001: A Space Odyssey), it also inadvertently highlighted an uncomfortable truth: as AI capabilities continue to improve, they will become increasingly effective tools for manipulating and fooling humans. And while we need not fear an imminent cyborg apocalypse, we do need to prepare and strategize for a new era of AI-enabled disinformation.

Disinformation operations — covert efforts to deliberately spread false or misleading information — have historically been a distinctively human endeavor. With the rise of instantaneous digital communications, malign actors have increasingly exploited the machine learning systems embedded in our daily lives to precisely target audiences, shape global public opinion, and sow social discord. Today, disinformation operators are expanding their manipulation toolkits to include new AI techniques. AI-generated synthetic media and convincing AI-enhanced chatbots now offer threat actors a growing array of persuasive, tailored, and difficult-to-detect messaging capabilities. While machine learning techniques can also be used to combat disinformation, they will likely remain insufficient to counterbalance the expanding universe of anonymous digital mercenaries. Unless liberal democracies develop whole-of-society counter-disinformation strategies, AI-enhanced disinformation operations will further exacerbate political polarization, erode citizen trust in societal institutions, and blur the lines between truth and lies.

Mapping and Defining the Modern Disinformation Landscape

First, a few quick definitions are in order. AI is a field of research that seeks to build computing technologies that possess aspects of human perception, reasoning, and decision-making. Machine learning, a subset of AI, involves the use of computing power to execute algorithms that learn from data. Algorithms are the recipes for completing programmed tasks; data help these systems “learn” about the world; and computing power is the engine that enables systems to perform specific tasks quickly and accurately. Over the last decade, significant improvements in machine learning capabilities have been enabled by advances in computer processing power, the rise of Big Data, and the evolution of deep learning “neural networks.” (These networks contain cascades of nodes that loosely mimic the neurons in the human brain and, in combination, can identify patterns in large datasets and encode complex tasks.) While distinct from human intelligence, AI excels at narrow tasks and has exceeded human capabilities in several fields.
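
To make these abstractions concrete, the sketch below shows the algorithm-data-compute triad in a few lines of Python, assuming the scikit-learn library and a toy dataset; it is a minimal illustration, not a representation of any production system.

```python
# Minimal sketch: a small neural network "learning" a pattern from data.
# Toy example only; real systems use far larger models and datasets.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Data: 1,000 labeled points arranged in two interleaving half-circles.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Algorithm: a small feed-forward neural network with two hidden layers.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)

# Compute: iteratively adjust the network's weights to fit the training data.
model.fit(X_train, y_train)

print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```

Scaled up by many orders of magnitude in data, parameters, and computing power, this same learn-from-examples loop underpins the deep learning systems discussed throughout this piece.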

As non-human intelligence has increasingly been integrated into the fabric of human activity, digital disinformation has emerged as a potent means of political warfare. Large technology corporations, driven partly by the competitive market for attention and advertising revenue, extract user data to refine their content-selection algorithms and optimize user engagement. As outlined in the Facebook papers, at the core of these influential social networks are recommendation systems that drive users down rabbit holes of progressively more personalized and novel content. A report from Katerina Sedova and her team at the Center for Security and Emerging Technology illustrates how malign actors exploit this attention-centric digital environment to micro-target and mobilize unwitting Internet users by seeding “the information environment with tailored content and hijacking legitimate online forums” through fake human accounts (sock puppets), automated botnets, groups of coordinated humans, and digital advertising. Disinformation operators seek to exploit human biases, heighten emotions, and induce information overload at the expense of rational decision-making. Researchers from Harvard’s Shorenstein Center argue that talented disinformation operators can “enable discriminatory and inflammatory ideas to enter public discourse” as fact by deliberately highlighting differences and divisions in society. Over time, this phenomenon can widen identity-based fissures between social groups, jam the gears of democratic governance, and, in some cases, catalyze violence.
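
To illustrate the dynamic, the toy sketch below caricatures an engagement-driven recommender: candidate posts are scored against a running profile of what the user has clicked, and each click pulls the profile, and therefore the next round of recommendations, further toward similar content. It is a deliberate simplification, not any platform’s actual ranking system.

```python
# Deliberately simplified sketch of engagement-driven recommendation:
# score candidate posts by similarity to what the user already engaged with,
# then surface the top scorers -- so each click narrows the next round of content.
# A caricature for illustration, not any platform's actual ranking system.
import numpy as np

rng = np.random.default_rng(0)
posts = rng.random((50, 8))                  # 50 candidate posts as feature vectors
user_profile = np.zeros(8)                   # running summary of what the user clicks

def recommend(user_profile, posts, k=3):
    scores = posts @ user_profile            # crude "predicted engagement" per post
    return np.argsort(scores)[-k:][::-1]     # indices of the top-k posts

for round_ in range(5):
    picks = recommend(user_profile, posts)
    clicked = posts[picks[0]]                # assume the user clicks the top item
    user_profile = 0.8 * user_profile + 0.2 * clicked   # profile drifts toward it
    print(f"round {round_}: recommended posts {picks.tolist()}")
```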

Digital disinformation techniques are diffusing and evolving at a rapid pace. Authoritarian regimes, particularly Russia and China, increasingly pursue new capabilities to project their disinformation operations more precisely at home and abroad. While foreign disinformation campaigns receive the majority of attention, domestic actors are adopting similar tactics, and operations are increasingly being outsourced to a growing transnational disinformation-for-hire industry. Given its proven success and future potential, well-resourced actors will continue to invest in advanced AI capabilities to augment their current disinformation operations.

Democratized Deepfakes and AI-Enhanced Chatbots

Machine learning techniques can generate highly realistic fake images, audio, and video known as “deepfakes.” These synthetic media capabilities are made possible by Generative Adversarial Networks (GANs), a technique that produces new synthetic data that becomes increasingly realistic as the system learns. A GAN pits two networks against one another. One network (the discriminator) is trained on a real dataset of interest and then judges whether new data is real or fake. The opposing network (the generator) produces novel data designed to fool the discriminator. Through this iterative competition, the generator improves at creating synthetic content that can be weaponized to mislead, deceive, or influence audiences. This technique has been used to create extraordinarily realistic artificial faces for legions of bot accounts, produce convincing audio for extortion or blackmail, and generate fake content that, when timed strategically, can destabilize governance and geopolitics.
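
For readers who want to see the adversarial loop in code, the following is a minimal sketch of a GAN training loop, assuming PyTorch and a toy one-dimensional “real” distribution rather than images; the discriminator and generator alternate updates exactly as described above.

```python
# Minimal GAN sketch (PyTorch assumed). Toy task: the generator learns to
# imitate samples drawn from a simple "real" distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate "fake" samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores whether a sample looks real (1) or fake (0).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the "real" data
    fake = G(torch.randn(64, 8))            # the generator's forgeries

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the (freshly updated) discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Should drift toward ~3.0, the mean of the "real" data, as training progresses.
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```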

As synthetic media capabilities become more accessible and user-friendly, experts predict that deepfakes will benefit those already in positions of power and influence and pose the most significant risks to communities antagonistic to traditional power structures. In a world replete with manipulated media, powerful individuals and institutions can conveniently dismiss inconvenient facts. This dynamic may perpetuate what Robert Chesney and Danielle Citron call a “liar’s dividend” in which bad actors caught in genuine recordings of misbehavior can dismiss the truth as AI forgery. Beyond the individual harms and misdeeds enabled by this dynamic, this trend will likely accelerate growing cynicism about the possibility and value of distinguishing between fact and fiction. According to a recent report from the United Nations Institute for Disarmament Research, the ability to “portray someone doing something they never did or saying something they never said” through deepfakes could “challenge and influence our perceptions of reality.”

While visual deepfakes have garnered the attention of policymakers, deepfake text may prove even more vexing. Breakthroughs in natural language processing and generation — machine learning algorithms that recognize, predict, and produce language — have given rise to sophisticated large language models capable of reading, writing, and interpreting text. As Karen Hao explains in MIT Technology Review: “By ingesting millions of web-based sentences, paragraphs, and dialogue, these models learn the statistical patterns that govern how each element should be sensibly ordered.” But these models absorb more than grammatical rules: because they excel at mimicking online discourse, they are also prone to parroting humanity’s most insidious biases.
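
The “statistical patterns” Hao describes can be illustrated at toy scale with a bigram model: count which word tends to follow which, then sample from those counts. Modern large language models replace raw counts with neural networks and billions of parameters, but the underlying objective of predicting a probable next token is the same.

```python
# Toy illustration of next-word prediction from statistical patterns.
# Real large language models use neural networks trained on vastly more text,
# but the core idea -- predict a probable next token -- is the same.
import random
from collections import Counter, defaultdict

corpus = (
    "the committee released a statement today . "
    "the committee denied the claims today . "
    "officials released a statement denying the claims ."
).split()

# "Training": count which word follows which (bigram statistics).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start="the", length=8, seed=0):
    """Sample a plausible-looking sentence from the learned statistics."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())
```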

Capable of writing persuasive and seemingly authentic content that conforms to a specific cultural milieu, large language models can be used to increase the scale and scope of disinformation operations. Early research has found that one model can write articles indistinguishable from those written by journalists (particularly as the partisanship of the content increased), emulate the style of extremist writing, produce racist manifestos from multiple viewpoints, seed new conspiracy narratives, and draft posts that exploit political wedges. Studies have also shown that threat actors can use large language models to streamline the work of human disinformation operators, facilitate micro-targeted propaganda campaigns, and amplify the explosive potential of leaked hacked documents.

Despite these hazards, large language models are being developed and deployed by companies and countries worldwide and are increasingly open-source. Because no technology yet exists that can comprehensively identify synthetic text online, malign actors may already be using these models to augment their disinformation operations. As GAN and large language model capabilities advance, machine learning research has begun to shift toward models that produce integrated combinations of text, video, audio, and still images, which will enable human-machine teams to produce high-quality, highly personalized disinformation at scale.

New machine learning techniques also enable the production of automated social media accounts — commonly called “bots” — that are better at mimicking human behavior, maximizing amplification, and avoiding detection. Conversational AIs are large language models capable of managing the “open-ended nature” of conversations, signaling a near future in which chatbots engage in seamless dialogue with humans. The AI research community typically makes its findings public so that other researchers can reproduce, learn from, and build on its work. In turn, open-source training datasets released by bot detection services are equally available to threat actors, who use them to develop more human-like bots.

These advances enable savvy disinformation operators to combine AI-powered chatbots with existing social listening and synthetic media capabilities to identify trending topics, develop a pool of human-curated messages, and deliver highly personalized narratives to targeted audiences. It will soon be possible for threat actors to train chatbots to specialize in specific trolling techniques or bring them to life with GAN-generated video skins that impersonate trusted sources. Before long, fully autonomous bot accounts will be produced en masse, improve rapidly with experience, and ceaselessly try to persuade, troll, and manipulate people online. As a 2017 report from the State Department’s Advisory Commission on Public Diplomacy predicts, machines will then process the expanding corpus of bot-generated content, producing a vicious cycle in which devices talk to, at, and over each other, progressively “drowning out human conversations online.” It is unclear to what extent these fears have been realized in the five years since that report was released, but claims about the prevalence of bots on social media platforms have already thrown a wrench into the proposed sale of one major platform, Twitter. The continued inability to determine the extent of Twitter’s bot problem, even with a $44 billion deal on the line, illustrates the profound challenge of identifying AI-generated content at scale.

And yet, the anticipated deluge of AI-enhanced disinformation has not yet drowned out human voices. Indeed, by our current best assessments, the highly contentious 2020 U.S. presidential election saw a notable absence of politically motivated deepfake content (with some exceptions). Instead, the most substantial ongoing damage caused by synthetic media is happening in the personal sphere, disproportionately impacting women and marginalized communities. For now, simple editing software and techniques like attaching misleading descriptions to existing content are significantly more common than AI-generated synthetic media. As researchers writing in the Bulletin of the Atomic Scientists point out, manipulated media does not need to convince its audience of its realism to spread widely and influence human behavior.

In the high-stakes realm of geopolitics, authoritarian states have just begun integrating deepfakes and other AI-enabled media into their disinformation toolkits. For example, in the lead-up to its full-scale invasion of Ukraine, the Kremlin reportedly planned to stage a Ukrainian attack on Russian civilians as a pretext for invading — but the plan would have employed actors and corpses, not deepfakes. In other words, even well-resourced threat actors continue to opt for more traditional forms of deception over AI-generated forgeries. Even when employed, these technologies have not yet been effective. For instance, a deepfake depicting Ukrainian President Volodymyr Zelensky surrendering was uploaded to a hacked Ukrainian news website, but the video was quickly debunked. At the same time, the Kremlin has pursued more subtle, AI-augmented information warfare strategies. For example, Russia has deployed swarms of fake accounts with AI-generated faces to bolster the accounts’ credibility while they parrot Kremlin talking points. Overall, the refinement of synthetic media generation will inevitably augment the disinformation capabilities of malign actors and become an increasingly routine aspect of online life.

Strategies for Combating AI-Enhanced Disinformation

Just as machine learning is used to amplify disinformation operations, other machine learning capabilities can be used to protect the information environment. Stakeholders are developing media provenance technologies to authenticate metadata — information about how, by whom, when, and where a piece of media was created and edited. Katarina Kertysova highlights how AI can identify patterns of words that indicate disinformation by analyzing cues from articles previously flagged as manipulative; similar feature-based detection capabilities can also be applied to synthetically generated images and videos. Social media companies and a new generation of tech start-ups also leverage machine learning alongside human moderators to identify disinformation and bot accounts, although these capabilities are not 100 percent effective. The Defense Advanced Research Projects Agency has an ongoing effort to develop forensic systems that improve the ability to spot inconsistencies in deepfake content.
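
Kertysova’s feature-based approach maps onto a standard supervised text-classification pipeline. The sketch below, which assumes scikit-learn and uses invented placeholder texts and labels, shows the basic shape of such a system; real detectors are trained on far larger labeled corpora and richer features.

```python
# Minimal sketch of feature-based disinformation detection: learn word-pattern
# cues from articles previously labeled as manipulative or legitimate.
# The tiny example texts and labels below are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING truth THEY don't want you to know, share before it's deleted!",
    "You won't BELIEVE what was just exposed, the mainstream media is silent!",
    "The city council approved the budget after a public hearing on Tuesday.",
    "Researchers published peer-reviewed findings on regional water quality.",
]
labels = [1, 1, 0, 0]  # 1 = previously flagged as manipulative, 0 = legitimate

# Pipeline: turn text into word-frequency features, then fit a classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

new_article = "EXPOSED: the secret plan the media won't report, share now!"
# Estimated probability that the new article resembles flagged content.
print(detector.predict_proba([new_article])[0][1])
```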

While authentication and detection tools are in development, they remain imperfect. There are at least two distinct problems confronting these capabilities: first, discerning whether media is synthetically produced, and second, discerning whether the information the media communicates is true or false. The techniques described above broadly address the first problem, but the second is significantly more challenging to automate. To judge whether media is true or false, an AI needs a sophisticated understanding of, among other things, “history, humor, symbolic reference, inference, subtlety, insinuation, and power.” Because detection systems lack these distinctly human abilities, threat hunting still relies primarily on tips from human actors, including government, media, and civil society partners — an approach that raises its own questions about bias and fairness. Meanwhile, fully autonomous detection systems make their own mistakes: they sometimes block lawful and accurate content, which may impair freedom of expression and information. Even the most advanced detection capabilities remain susceptible to adversarial examples — “optical illusions” for machine learning models intentionally designed to make them misidentify images and videos. And as discussed above, it is incredibly difficult to determine whether a digital text was created by a human, a machine, or some combination of the two.
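
To show how little effort an adversarial example can require, the sketch below applies the well-known fast gradient sign method to a toy, untrained classifier standing in for a real detection model; the attack simply nudges each pixel in the direction that increases the model’s error.

```python
# Minimal sketch of an adversarial example via the fast gradient sign method (FGSM).
# A toy, untrained classifier stands in for a real detection model; the logic --
# perturb the input in the direction that increases the model's error -- is the same.
import torch
import torch.nn as nn

torch.manual_seed(0)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # "real" vs "fake"
image = torch.rand(1, 1, 28, 28)          # placeholder input image
true_label = torch.tensor([0])            # the label we want the model to keep

# Compute the gradient of the model's loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(classifier(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Compare predictions before and after the (nearly invisible) perturbation.
print("original prediction:   ", classifier(image).argmax(dim=1).item())
print("adversarial prediction:", classifier(adversarial).argmax(dim=1).item())
```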

The most direct way to combat AI-enhanced disinformation is to focus on the infrastructure that facilitates its distribution. While Congress could pursue legislation that directly regulates social media algorithms, this approach must be carefully tailored to avoid serious constitutional hurdles. Instead, legislation grounded in content-neutral goals such as carefully labeling bot accounts, strengthening data privacy laws, mandating cross-platform interoperability, increasing algorithmic transparency, and fostering fair competition could offer potential avenues for mitigating disinformation and safeguarding online authenticity. The goal of regulation should be to give social media companies incentives to shift away from their ad-centric business models and take on a more significant role in protecting democratic information environments. At the same time, regulations should be careful not to impede AI research and risk weakening the United States’ hand in its technological competition with China.

Beyond these regulatory efforts, societal stakeholders should pursue counter-disinformation strategies. Congress could allocate funding for new techniques to detect synthetic media, invest in local news organizations across the country, and require services that sell aggregated consumer information to vet potential purchasers. The U.S. government should work with allies and partners to share information and best practices for detecting disinformation operations. Blockchain technology — a decentralized ledger whose records are nearly impossible to alter after they are created — has strong potential to help verify the provenance of digital content, as sketched below. Platforms and AI researchers should develop a publication risk framework to protect their open-source research from unethical use and adapt cybersecurity best practices to counter disinformation. Traditional media outlets must develop similar strategies to avoid unintentionally amplifying disinformation operations. Critically, more resources, training, and equipment should be allocated to civil society organizations to cultivate a genuinely whole-of-society counter-disinformation strategy.
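
The tamper-evidence that makes blockchain attractive for provenance can be illustrated with a bare-bones hash chain in Python: each record’s hash covers the previous record, so editing any earlier entry breaks every subsequent link. Real distributed-ledger provenance systems add consensus, digital signatures, and standardized metadata on top of this idea; the names and records below are invented for illustration.

```python
# Bare-bones sketch of tamper-evident provenance via a hash chain.
# Real blockchain-based provenance systems add distributed consensus, digital
# signatures, and standardized metadata; this only shows the chaining idea.
import hashlib
import json

def add_record(chain, metadata):
    """Append a provenance record whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"metadata": metadata, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"metadata": record["metadata"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, {"creator": "newsroom_camera_01", "action": "captured", "time": "2022-06-01T10:00Z"})
add_record(chain, {"creator": "photo_desk", "action": "cropped", "time": "2022-06-01T11:30Z"})
print(verify(chain))                      # True: the recorded history is intact
chain[0]["metadata"]["creator"] = "bot"   # tamper with the history
print(verify(chain))                      # False: the altered record is exposed
```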

Open societies must pursue robust and long-term initiatives to help their populations become more balanced and careful consumers of online information. Promoting “cyber citizenship” for all age groups will be the most effective long-term solution for achieving resilience to disinformation. These skills include media literacy, digital ethics, civics, and cybersecurity. The United States should incorporate lessons learned from successful digital media literacy programs worldwide, many of which it supported and funded. Cultivating cyber citizenship skills will help inoculate individuals against disinformation operations, whether AI-powered or not.

More broadly, as the Aspen Institute’s Commission for Information Disorder argues, spreading false or misleading information online is a byproduct of complex structural inequities that have corroded trust between and among communities. To truly address the spread of misinformation and stymie deliberate disinformation operations, open societies must address the widening gaps between the haves and have-nots by investing in their citizens’ long-term development and well-being.

The Existential Threat of AI-Enhanced Disinformation Operations

New AI capabilities are rapidly increasing the volume, velocity, and virality of disinformation operations. As they continue to improve and diffuse, they further threaten to erode trust in democratic governance and encourage citizens to doubt the possibility of truth in public life. The profound cynicism introduced by AI-enhanced disinformation can be used to fuel mob majoritarianism and create new opportunities for illiberal politicians to campaign on promises to restore “order” and “certainty” by curtailing free speech and other civil rights. Such an outcome would hasten what Timothy Snyder has dubbed a “politics of eternity” in which malicious actors “deny truth and seek to reduce life to spectacle and feeling.”

Open societies rely on a shared basis of factuality to function effectively, especially during inflection points like national elections and the organized transitions of power. How we collectively adapt to a world of AI-enhanced disinformation today will determine the future of liberal democracy, basic standards of truth, and our shared perceptions of reality.

Note: The opinions articulated in this publication are those of the author. They do not purport to reflect the opinions or views of any organization with which he is affiliated. 

Image: This picture, taken on July 16, 2021, shows user Melissa messaging her virtual boyfriend — a chatbot created by XiaoIce, an artificial intelligence system designed to create emotional bonds with its estimated 660 million users worldwide — on her mobile phone in Beijing. (Photo by WANG ZHAO/AFP via Getty Images)