Autonomous weapon systems, that is, weapon systems capable of independently selecting and engaging targets, are usually portrayed in one of two ways. Either they are depicted as some kind of Terminator-like robotic soldier or as merely a more independent version of a weapon already in use. But both of these analogies miss what is fundamentally new – and difficult to regulate – about autonomous weapon systems.

Comparisons like these are often useful: analogies and allusions to existing weapons and popular culture make new technology accessible, identify potential dangers, and augment desired narratives. Perhaps most importantly, analogical reasoning is a tried-and-true method of stretching existing law to cover new technologies and avoid law-free zones. “Horseless carriages” allowed people to adjust to the idea of gas-powered vehicles; today’s “driverless cars” are helping people envision some benefits and risks of autonomous vehicles. Just a few years ago there was wide disagreement about whether cyber was a law-free zone, but analogies to physical spaces contributed to the growing consensus that international law governs state action in cyber “space.”

However, as I discuss in a forthcoming paper, there is no appropriate legal analogy for autonomous weapon systems. All potential analogies misrepresent critical traits of autonomous weapon systems: thinking of them as weapons minimizes some autonomous weapon systems’ capability for independent and self-determined action, while the combatant, child soldier, and animal combatant comparisons overemphasize it. Furthermore, all of these analogies limit our ability to think imaginatively about this technology and anticipate how new kinds of autonomous weapon systems might develop. Rather than being a single, embodied entity, autonomous weapon systems will likely take a variety of forms, ranging from disembodied malware to networked systems of sensors and robots (an “Internet of Weapons”). These different forms and capabilities will affect how autonomous weapon systems can or should be regulated.

The primary forum for the international discussion on regulating autonomous weapon systems has been a series of meetings dubbed the “Meeting of Experts on Lethal Autonomous Weapon Systems,” hosted by states party to the Convention on Certain Conventional Weapons (CCW) – the treaty banning or restricting the use of land mines, blinding lasers, and other weapons. These widely attended meetings have highlighted the regulatory issues posed by autonomous weapon systems. But discussing this unconventional technology in the context of traditional weapons is misleading. Instead, as is often the case when law by analogy is insufficient, what is needed is new law – which would be best developed in a different forum.

Weapons and Combatants

Autonomous weapon systems are most often discussed and evaluated as weapons or combatants, but both of these characterizations are fundamentally inaccurate. If they are weapons, they are unique insofar as they have the capacity to take independent and sometimes unpredictable action. If they are combatants, they are unique insofar as they are not driven by human motivations, they might or might not be constrained by human morality, and they may sometimes have their capacity for independent action sharply curtailed, either by their deployers or by hackers.

The fact that autonomous weapon systems are neither traditional weapons nor traditional combatants complicates the debate over how best to regulate them. The law of weapons regulates physical design and capabilities, while the law governing combatants attempts to direct or constrain behavior through a combination of training and accountability measures. Weapons are lawful or unlawful; combatants may act lawfully or unlawfully. But the underlying assumptions of these legal regimes do not hold when applied to autonomous weapon systems, as they are neither dependent tools nor self-governing human beings.

That doesn’t stop us from using the weapon/combatant analogies to make legal arguments. Indeed, selecting between these characterizations predetermines the answers to most of the difficult and seemingly perennial legal questions associated with autonomous weapon systems: What level and kind of legal review is sufficient for a weapon system with emergent capabilities? What constitutes meaningful human control over a decision to attack? Who can or should be held accountable if an autonomous weapon system’s action results in a serious violation of international humanitarian law?

Consider the debate on whether autonomous weapon systems will comply with the distinction requirement. Parties to a conflict must distinguish between lawful targets (combatants, civilians directly participating in hostilities, and military objectives) and unlawful targets (civilians, surrendering or wounded combatants, and civilian objects). If autonomous weapon systems are evaluated as weapons, it is clear that they may be used in a discriminate manner and are therefore not unlawful as a class. But if they are considered combatants, it is equally clear that they are (currently) incapable of distinguishing between lawful and unlawful targets and are therefore incapable of reliably acting lawfully.

Accordingly, advocates for a complete ban, advocates for proactive regulation, and advocates for strategic procrastination shift fluidly between analogies, depending on the narrative point they wish to advance. Ban advocates simultaneously argue that autonomous weapon systems will be incapable of complying with the targeting requirements usually understood to apply to combatants and that it is possible to ban them under the CCW. Meanwhile, ban skeptics suggest that autonomous weapon systems can be used in accordance with laws governing weapons while concurrently observing that they may be more humane than human soldiers. These narrative inconsistencies are not due to bad faith arguments – in retrospect, I have been guilty of perpetuating this confusion in my own writing – but rather are the natural byproduct of the fact that neither analogy is appropriate.

Child Soldiers and Animal Combatants

Child soldiers and animal combatants (animals that actively participate in hostilities, as opposed to serving in supportive roles, such as sentries or transporters) are alternative analogies that come closer to capturing what is unique about autonomous weapon systems, but they too ultimately prove inadequate. These entities are capable of autonomous action and by extension may sometimes take unpredictable action that results in serious violations of international humanitarian law; yet they cannot be held individually liable under existing international criminal law.

However, comparing autonomous weapon systems with child soldiers ignores the foundational reasons for regulating the respective entities: child soldiers are banned to protect children from one of the worst forms of child labor; at least at present, there is no similar need to protect autonomous weapon systems from participating in armed conflict (except, perhaps, for the sake of humans interacting with them).

Animal combatants, like the dogs trained to carry explosives under enemy tanks in World War II, may be the best analogy: they aren’t quite weapons, insofar as they are capable of independent action and that action can be modified through training; they aren’t quite combatants, as they cannot be taught the law of armed conflict, cannot act with the requisite mens rea for criminal liability, and cannot be held responsible for war crimes. But there is little written law on this subject: attempting to apply a “law of animal combatants” to autonomous weapon systems simply highlights its absence. 

The Limits of Analogy: Misleading and Constraining

As George Carlin observed, we think in language, and so the quality of our thoughts can only be as good as the quality of our language. Just as the term “driverless cars” constrains our ability to imagine the myriad forms autonomous vehicles might take, autonomous weapon systems may be structured in ways not suggested by any of the aforementioned analogies.

An autonomous weapon system might be a collection of networked systems, like the U.S. Navy’s LOCUST (Low-Cost UAV Swarming Technology) system, which can launch up to thirty small drones that communicate with each other to fly in formation and engage in “defensive or offensive missions.” It is possible to imagine vast systems composed of central “brains,” widespread sensors, and varied unmanned aerial, underwater, or surface vehicles. Each component, individually, might not constitute an autonomous weapon nor present much of a challenge for traditional legal review procedures, but the collective capabilities would constitute an entirely new means of waging war.

Alternatively, autonomous weapon systems might take the form of “centaur corps” – human-machine teams designed to leverage the strengths of both entities. ALPHA is famous for having recently beaten a retired U.S. Air Force colonel in multiple flight simulator trials, but it was designed to assist, rather than replace, human pilots – either by providing real-time advice or by flying protective “wingmen” UAVs. Some see human-machine teaming as the ideal; others are concerned that keeping the human in the loop will hobble U.S. forces.

States also continue to invest in research and development of powered, armored exoskeleton suits like the U.S. Special Operations Command’s Tactical Assault Light Operator Suit (TALOS) program, designed with power-assisted limbs, 360-degree night-vision sensors, and an open architecture that will allow for a variety of add-on capabilities. Accordingly, some have argued that it may be necessary to expand the legal review for new weapons to encompass augmented or cyborg combatants.

In fact, autonomous weapon systems need not be embodied at all: a computer virus or a complex software system might also be a kind of autonomous weapon system – and autonomous cyberweapons will likely flourish long before physical autonomous weapon systems are widely deployed. DARPA has just held a cyber challenge wherein it invited hacker teams to create programs that can independently identify and patch security flaws in a system. A similarly autonomous program might be used to identify and exploit vulnerabilities in other systems.

As Lawrence Lessig famously stated, “code is law.” In other words, the architecture of a new technology is relevant to how it can be regulated. The law of weapons can be stretched to cover some kinds of autonomous weapon systems – but it is incapable of addressing all of the various challenges posed by the advent of these nontraditional forms.

The Need for New Law – and a Different Forum

As with other new technologies with destructive capabilities, states have a vested interest in creating a shared governing legal regime for autonomous weapon systems. And, because they are used in semi-autonomous modes or constrained environments, the majority of embodied autonomous weapon systems in use today can be analogized to other weapons and regulated accordingly. But as weapon systems with greater autonomy, emergent capabilities, or entirely new forms are developed and deployed, this legal regime will no longer be sufficient.

As is often the case when law by analogy fails, what is needed is new law – at the very least, new regulations for autonomous weapon systems, but perhaps this is also an opportunity to develop the law for all unconventional warfighters. A new framework treaty – call it a “Convention on Certain Unconventional Warfighters” – would create a space for states and civil society to address the unique legal issues posed by different entities participating in armed conflict that are neither traditional weapons nor traditional combatants.

Granted, there are significant political and practical roadblocks to shifting the discussion to a new forum, let alone developing a new multilateral treaty regime – but the sooner we escape the confines of these insufficient analogies, the sooner we can create comprehensive and effective regulations for this challenging new technology.

This post is a synopsis of the arguments made in Autonomous Weapon Systems and the Limits of Analogy, in The Ethics of Autonomous Weapon Systems (Claire Finkelstein, Duncan MacIntosh & Jens David Ohlin eds., forthcoming 2017).