(Editor’s Note: This article is the second installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law.)

For those attempting to sort through the threats, opportunities, and outright hype posed by artificial intelligence (AI), the present moment is daunting, if not disorienting. If industry leaders are to be believed, AI is poised to upset long-held paradigms about every aspect of human activity, from labor to creativity to warfighting. On the latter front, conflicts from Syria to Ukraine to Armenia indeed seem to nod toward the future envisioned by a prominent U.S. general: datafied battlefields with near-instantaneous targeting, firing, and battle-damage assessment, minimizing the need for human intervention and oversight.

That general was William Westmoreland in 1969, which suggests that such a future might also be the military analog of cold fusion: always somehow just around the bend, though never quite within reach. But as a recent U.N. report helpfully reminds us, the military implications of AI are not narrowly confined to tactical targeting and operational engagement. Automation is likely to permeate more strategic-level tasks, from command and control (C2) to management, logistics, and training – for which the integrity, quality, and veracity of underlying data will be pivotal. Even beyond these high-level functions, to the extent that human judgment will remain central to strategy, diplomacy, statecraft, and international relations, AI poses profound questions about another paradigm – intentionality.

Making the “Fog of War” Denser

Prominent international relations theorists from Schelling to Jervis have scrutinized the role of signaling and perception between and among states. The implications are often existential: particularly for nuclear powers, how military and national security leaders sense and react to the moves of their foreign counterparts has profound ramifications for avoiding crises and prevailing in conflict. Even the terms used to describe these dynamics — “rational actor model” or “game theory” — underscore the central role of human intentionality and interpretation. For example, during the Cuban Missile Crisis, military scholars note that “the central question for President Kennedy wasn’t whether or not there were nuclear-capable missiles on the island. Rather, the question was how far the Soviet Union was willing to go … Resolve — not estimates of the number of missiles, doctrinal processes for firing them, and probable flight times and targets — is what decision makers needed to understand.”

Gauging adversary resolve was never foolproof – humans are fickle, unpredictable, and even strategically self-defeating. They are also evolutionarily hard-wired to assume intentionality where none exists. In a world where decision-making is expected to become increasingly automated, the margins for unanticipated and unintended consequences may widen as well. If such technologies increasingly mediate both warfighting and battlefield intelligence, states’ ability to send and discern signals effectively could, paradoxically, diminish. Military theorist Martin van Creveld examined this relationship between technology and warfare, concluding that “efficiency, far from being simply conducive to effectiveness, can act as its opposite.”

Technology designed to reduce the fog of war might only make it grow denser.

RAND political scientist Michael Mazarr calls such technological fog-inducing problems “predatory abstract systems” — like an automated phone tree or online chatbot — characterized by stacks of self-replicating rules and procedures that are indecipherable to outsiders (and probably to insiders as well). Such systems rule out “any need for conspiring wizards behind the curtain. At a certain point, the webs of structured interaction become so dense and self-perpetuating that any malign elites or overlords could leave the scene and the machine would keep churning along.” The result, for decisionmakers from the battlefield to the commander’s chair, is a complicated and uneven trust relationship with automation. For example, research suggests that, in crisis situations, national security experts may be less inclined to escalate in response to human errors (like an accidental shoot-down of an aircraft) than to the same mistakes made by AI-directed systems. In other words, retaliation may stem mostly from the desire to “punish the rival for delegating lethal decision-making to a machine.”

As driverless cars continue to demonstrate on American streets, the greatest risks from AI may result as much from flaws in its design as from any deliberate ill intent: a rigid adherence to the right rules in the wrong (and potentially unforeseeable) scenarios. “Error is as important as malevolence,” says former Navy secretary Richard Danzig. In AI, such failures can be attributed to poorly specified goals or rewards, or to applications developed in simulated environments and trained on past scenarios, neither of which is necessarily comprehensive or indicative of future activity. Moreover, the more complex AI systems are, the blurrier the lines of authority for decisions (and responsibility for their consequences) become. The relevant human actors – operators, regulators, and designers – all coalesce into a relatively novel collective agent called AI, which, much like “the market” or “bureaucracy,” is difficult to check pre-emptively or hold accountable post hoc.

The delegation of authority to technology has historically been driven by the desire to remove human fallibility from the equation, making the conduct of war more precise, less violent, less destructive, and less costly – while somehow retaining its ability to accomplish strategic objectives. But this tradeoff was likely always illusory, an “epistemological flattening of complexity into clean signal for the purposes of prediction,” per AI researcher Kate Crawford. Subjectivity and ambiguity are an immutable part of the human condition – concealable, perhaps, beneath layers of algorithm and automation, but still omnipresent, lurking in the selection and categorization of underlying data and in application design. This is not a feel-good veneration of human reason (which frequently grapples with irreducible complexity and uncertainty), but rather a reminder to distinguish it from the type of “reasoning” exhibited by AI: finding patterns in a dataset that its designers have curated and deemed sufficiently representative of past experience to lend those patterns predictive power. As Keren Yarhi-Milo, Dean of Columbia’s School of International and Public Affairs, asserts, such “simplified models of reality” can draw attention toward indicators that make sense within the confines of a given dataset but may be misleading or irrelevant in real-world contexts. Meanwhile, in the midst of national security crises, decisionmakers have neither the time nor the inclination to sanity-check training models.

Striking the Right Human-Machine Balance  

However enthusiastically militaries might embrace AI, command decision-making is not reducible to statistical inference, battlefields are not merely bundles of unstructured data points, and human behavior and cognition do not follow fixed laws like those of Newtonian physics. MIT professor Elting Morison expressed concern in 1966 that such techno-centric thinking might erode “our sense of the significance of the qualitative elements in a situation…and that the computer which feeds on quantifiable data may give too much aid and comfort to those who think you can learn all the important things in life by breaking experience down into its measurable parts.” To adopt (or to pursue) such a rubric of human behavior would be, in Morison’s telling, to “fit men into machinery rather than fitting the machinery to the contours of a human situation.”

In this regard, decreased human intervention in military contexts probably necessitates more of it in diplomatic ones. Escalation dynamics are poised to become even less controllable in an era when states could feasibly threaten automated kinetic responses to perceived violations of certain thresholds (e.g., incursions into airspace or contested waters). During the Cold War, trust deficits between adversaries could be at least partially overcome through careful negotiations and confidence-building measures specifically aimed at managing threat perceptions and expectations. AI-assisted weapons and intelligence are likely to lend even more urgency to these bilateral and multilateral contacts, as much to navigate the tensions of the wholly unintended as to mitigate explicit ill intent.

Ultimately, national security officials must determine – and communicate to allies and adversaries alike – what “the appropriate division of labor between humans and machines” looks like, and what policies and norms ought to stem from that. There is no guarantee of success, nor of good faith on everyone’s part, in that process, but the addition of more automation will only compound epistemic uncertainty, missed signaling, and a lack of accountability for mistakes. Industry leaders promise that AI will enhance decision-making in warfare by changing the way we translate data into knowledge. Insofar as national security leaders and practitioners aim to participate in this paradigm shift, notions of what constitutes signaling and intent must similarly evolve if states are to avoid misperceptions, escalation, and an automated slide into war.

IMAGE: Digital conceptualization (via Getty Images)