(Editor’s Note: This article is the fourth installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law).

In certain tech circles, talk about military artificial intelligence (AI) has recently taken an unexpected turn. Previously, thousands of employees at Silicon Valley firms made headlines for refusing to develop AI for defense purposes; those who did work with the military generally kept quiet about it. Now new voices have emerged, openly eager to send their AI systems to war, not only in the service of profit but also, as they put it, in aid of democracy.

Helsing, a military AI firm backed by Spotify CEO Daniel Ek, describes its mission as “artificial intelligence to serve our democracies.” In an op-ed last year, Ek, along with fellow Helsing board member Tom Enders, argued that Europe’s best defense against “imperialistic aggression” lies with military software startups that “share the mission of keeping democracies from harm.” Alexandr Wang, a self-described “China Hawk” and CEO of Scale AI, told the Washington Post that his company sells to the military to help “ensure that the United States maintains [its] leadership position.”

Tech funders, whose positions in defense startups have expanded rapidly over the last two years, have adopted a similar tone. David Ulevitch, a general partner at the storied VC firm a16z, which has stakes in various military startups, told one recent interviewer, “If you believe in democracy, democracy demands a sword.” Some in the sector have taken to comparing themselves to the 20th-century physicists who gifted humanity the atomic bomb.

The intended target of these arguments is not the general public, but lawmakers who control military budgets and have the power to overhaul defense acquisition policy. For Western politicians in these positions, the logic is hard to resist. If AI really is key to the “arsenal of democracy,” as the startup Anduril put it in a report published in 2022, a vote against AI is a vote for authoritarianism. Anything that slows development of the technology is a win for Russia and China.

Given the rapid pace of AI development and the deteriorating relations between the global superpowers, this all might seem hard to deny. But the “AI for democracy” argument has some dangerous flaws. Specifically, it hinges on three misguided assumptions: (1) AI will be a decisive factor in a near-peer conflict; (2) AI will enable new modes of war that are ethically superior to conventional warfare; and (3) the threats that military AI poses to one’s own democracy can be easily mitigated. If governments swallow these assumptions uncritically, they risk setting AI policy that makes the world both less secure and less democratic.

Assumption 1: AI Wins Wars

The idea that AI wins wars is the least questioned of all the precepts of AI power theory. And yet concrete proof of it remains scarce. In reality, the technology commonly referred to as AI (that is, machine learning) has not yet been widely adopted for warfighting functions, even though militaries have been investing in it for more than a decade.

According to one recent study, drones and robots can run only a fraction of the AI capabilities of the programs we have on our desktops and smartphones. Large language models could start seeing use in intelligence retrieval and summarization at the command level, but the technology remains far too unreliable to be trusted with the kinds of battlefield decisions that would turn the tide of a major conflict.

The fact that a computer can beat humans at complicated computer games is not proof that it can beat soldiers at war. We need hard evidence from the ground. But the sector has been reluctant to provide receipts. Alex Karp, the CEO of Palantir, suggested in February that the company’s software is giving Ukraine an edge in its fight against Russia. That’s a major claim. But the company hasn’t shared any details or metrics. Meanwhile, the Pentagon will not say which algorithms, if any, it uses in its own operations.

A second, equally flawed tenet of the AI-for-democracy argument is that AI can only be defeated by AI. The U.S. National Security Commission on AI, which was led by former Google CEO Eric Schmidt as part of his multi-pronged campaign for digital transformation in Washington, has written that “defending against AI-capable adversaries operating at machine speeds without employing AI is an invitation to disaster.” It may be true that if your enemy is building a fleet of battleships, you should build one too.

But the most effective way to beat an adversary’s AI might simply be to destroy the communication networks that allow it to receive and share data. One could stump a strategic AI decision tool by engaging in tactics not covered by its training dataset. Computer vision systems for targeting can probably be fooled with inventive camouflage. Russia’s most effective defense against Ukrainian drones is not its own fleet of drones, but rather its use of radio jammers to block the signals on which drone attacks depend.
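To make the underlying mechanism concrete, here is a deliberately toy sketch in Python (the labels, numbers, and scenario are invented for illustration and have nothing to do with any fielded military system). It shows how a classifier that performs well on the kind of data it was trained on can fail, while still reporting high confidence, when inputs shift outside that training distribution, which is exactly the gap that unfamiliar tactics or inventive camouflage would exploit.

```python
# Hypothetical illustration only: a classifier trained on one distribution
# of inputs fails -- confidently -- on inputs outside that distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters, e.g. "vehicle" (0) vs. "decoy" (1).
X_train = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                     rng.normal(4.0, 1.0, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)
clf = LogisticRegression().fit(X_train, y_train)

# In-distribution test set: accuracy is near-perfect.
X_test = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                    rng.normal(4.0, 1.0, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)
print("in-distribution accuracy:", clf.score(X_test, y_test))

# Distribution shift: true "vehicle" objects now appear in a region the
# model never saw (think novel camouflage or unfamiliar tactics).
X_shift = rng.normal(8.0, 1.0, (200, 2))
y_shift = np.zeros(200, dtype=int)
print("out-of-distribution accuracy:", clf.score(X_shift, y_shift))
print("mean confidence on shifted inputs:",
      round(clf.predict_proba(X_shift).max(axis=1).mean(), 3))
```

The point of the toy example is only that strong performance on familiar inputs says little about behavior on unfamiliar ones, and an adversary gets to choose how unfamiliar the inputs will be.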

Assumption 2: AI Warfare is Noble

It has long been argued that AI-enabled warfare will be more precise, more predictable, and less costly than conventional warfare. Many say AI will have a superhuman ability to discriminate between combatants and civilians. That it won’t fall victim to the emotions that lead human soldiers to commit atrocities. It won’t get tired, they point out. It won’t get angry. It won’t thirst for revenge.

This complements the AI-for-democracy argument well. But it does not reflect the reality of how machine-learning-based systems behave. Complex forms of AI exhibit unpredictable emergent behaviors. As a recent U.K. government report put it, AI can engage in harmful, self-defeating goal-oriented actions. AI can also be hacked and manipulated. All of this would make AI-enabled militaries less governable than well-trained human forces.

The governments and contractors who promise to make their AI systems predictable, explainable, and secure rarely acknowledge that doing so will come at the expense of the technology’s tactical usefulness. An AI system that is predictable will be less powerful than one capable of engaging in tactics that no human could ever conceive. An algorithm that must run everything by a human won’t be able to make decisions at “machine speed.” A pilot will have less meaningful control over a squadron of 10 autonomous drones than she would over a solitary robotic sidekick.

The tactic that military AI is predicted to be most useful for is the swarming of vast numbers of disparate, mostly autonomous weapons. Not tens, but thousands. To beat a near-peer rival, the AI software tools that companies are pitching will need to be backed by huge numbers of bombs and missiles, too. This way of war won’t be clean. It won’t be safer for civilians. It will be cataclysmic in its messiness, nightmarish in its unpredictability, and unprecedented in its inhumanity.

Assumption 3: AI’s Threat to Democracy is Easily Mitigated

The story of runaway technology has been written many times before, both in ancient history and in recent memory. The United States was the first military force to deploy drone technology extensively; now drones have been used to commit or enable destabilizing atrocity crimes within and beyond many warzones. The technology has also become an instrument for surveillance and intimidation in more than one democracy.

The potential applications of military AI for surveillance and atrocity crimes are far broader. Even if the technology is used without malign intent, it can erode the laws and principles that distinguish ethical militaries from unethical ones. For example, across the spectrum of military use cases, wider adoption of AI could, because of the technology’s inherent unpredictability, undermine the measures of military accountability that are a necessary hallmark of any democracy’s armed forces.

Even if AI-enabled technology could be designed to act predictably, machines, according to the International Committee of the Red Cross, cannot perform the legal judgments mandated by the laws of armed conflict. Meanwhile, the seeming reluctance of governments and companies to inform the public about the use of AI in war runs contrary to the principle of transparency, a central tenet of AI ethics, not to mention liberalism.

Building AI that is both safe and accountable when used by one’s own forces, while also being unlikely to fall into the wrong hands, is neither a quick nor an easy job. It requires testing techniques as yet unknown to science, vast organizational reforms and personnel training, new instruments of accountability and transparency, and algorithmic audits that span the full development and deployment pipeline.

Making military AI safe for democracy also requires, at its foundation, democracy itself. The Special Competitive Studies Project, a nongovernmental panel that emerged from the National Security Commission on Artificial Intelligence, has proposed that Western powers use a democratic process to decide how much AI risk is acceptable. That sounds great in theory.

But democracy, just like algorithmic testing and auditing, is slow by design. It is necessarily iterative and ponderous. It is a tortoise to the hare that is authoritarian rulemaking. Many players (and no doubt a few lobbyists, behind closed doors) have stressed that this puts the West’s illiberal foes ahead in the AI race. It probably does. And that’s the crux of the AI-for-democracy problem.

Just as the most effective defense against AI is not AI, the best bulwark against illiberalism is never an illiberalism of one’s own. The Pentagon recently announced a new push to develop autonomous drones, aimed squarely at offsetting China in a potential conflict, which it says will move significantly faster than its traditionally cautious development process. Within two years, it wants to field thousands of autonomous drones. If AI supremacy is in fact compatible with democratic principles such as transparency, meticulous vetting, and public buy-in, this would be a good opportunity to prove it.

For their part, those openly agitating for an AI arsenal of democracy should re-examine their arguments. It is natural for major powers to pursue military superiority by means of new technologies. It is also natural for those positioning themselves to profit from these races to couch their work in ideological terms. But as I’ve written in the past, too much of mainstream AI policy discourse is underpinned by faulty assumptions that leave little room for debate and the exploration of alternatives. In the case of military AI, leaving these assumptions unquestioned will not only give rise to otherwise preventable perils but also undermine what the technology is supposed to be defending: democracy itself.

IMAGE: Digital map of the Americas (via Getty Images).