The decision to kill other humans lies at the heart of concerns over Autonomous Weapon Systems (AWS).  Human judgment regarding whether lives will be taken and objects destroyed during armed conflict inherently triggers an evaluation under International Humanitarian Law (IHL) as to the lawfulness of an attack.  As the link between human interaction and lethal action by weapon systems degrades, how can legal advisors evaluate who “decided” to kill?  Is it possible that human control over AWS might be diluted to the point where it would no longer be reasonable to say that a human decided that such a weapon would kill?

A team of active-duty military and civilian professors at the Stockton Center for the Study of International Law at the U.S. Naval War College recently completed a research project that addressed these and other questions regarding future AWS.  One aspect of the study focused on the intersection of artificial intelligence (AI) and machine learning in autonomous weapon systems with IHL. Machine learning, in particular, presents a unique set of issues that challenge traditional concepts of control over weapon systems.  In this regard, the project helped distill a specific question central to an IHL evaluation of AWS.    

When human decisions with legal significance are effected indirectly through machines, what does it then mean – from a technological perspective – to “decide” who is killed?  When a soldier bayonets an enemy fighter on the battlefield, the blade effects the soldier’s decision in the most direct way.  Torpedoes, cruise missiles, and over-the-horizon air-to-air missiles are farther removed but are still simply and logically traceable to a human’s decision to kill.  But consider the hypothetical case of an unmanned submarine that has been granted the authority to attack hostile warships after spending days, weeks, or even months without human interaction.  Suppose the submarine was programmed to independently learn how to more accurately identify potential targets.  Here, the attenuated link between human decision-making and lethal kinetic action gives us pause.

This attenuation may explain why some commentators, speculating about future AWS equipped with sophisticated AI, ascribe decisions to the machines themselves.  But even advanced machines do not decide anything in the human sense.  The hypothetical submarine above, for example, was granted a significant degree of autonomy in its authority and capability to attack targets.  Even if the system selects and engages targets without human intervention, however, it has not made a decision.  Humans programmed it to achieve a certain goal and provided it some latitude in accomplishing that goal.  Rather than focusing on human interactions with autonomous weapons, commentators’ inquiries should center on whether we can reasonably predict the effects of an AWS.

The decision to kill must not be functionally delegated to a machine by granting it authorities or capabilities that prevent us from predicting the death and destruction it will inflict.  Functional delegation does not imply that machines are making decisions.  Instead, it means that humans have abdicated their duty under IHL to decide who dies by creating an unpredictable weapon.  Of course, any weapon that is not reasonably predictable is per se unlawful under IHL because, by definition, such a weapon would be indiscriminate in multiple ways, a reality that is not unique to AWS.

This does not necessarily mean that humans must provide input to future AWS at a point that is temporally proximate to lethal action in order for those systems to comply with IHL.  First, IHL imposes no such requirement.  Second, from an operational perspective, such a requirement might prove counterproductive in the event of a future conflict with a near-peer competitor.

Let’s explore why a blanket requirement of human input that is temporally proximate to lethal kinetic action is unnecessary from an IHL standpoint.  An anti-tank land mine may remain in place for an extended period without activating.  Yet such systems are not indiscriminate per se.  Indeed, if future land mines were equipped with a learning capacity that somehow increased their ability to positively identify valid military objectives, that capacity could potentially enhance the lawfulness of the system.  Accordingly, the analysis of the legality of an AWS will turn in large part on whether it is possible to reasonably predict the target or class of targets the system will attack.  That determination will depend on the specific authorities and capabilities granted to the AWS.  If the lethal consequences of an AWS’ actions are unpredictable, the decision to kill may have been unlawfully delegated to a machine.

Moreover, future military commanders may need to address threats that are too numerous and erratic for humans to respond to in time.  For example, China is allegedly developing unmanned systems that could operate in swarms to rapidly overwhelm a U.S. aircraft carrier strike group.  In order to address such threats, future forces will likely need scalable options to fight at “machine speed.”  If a commander is forced to rely on affirmative human input in order to use force against each individual threat, the battle may be over before it has begun.

As Rebecca Crootof and Frauke Renz aptly point out, policy and regulations are needed in order “to evaluate and proactively address risks associated with increasing autonomy in weapon systems, to preserve the law of armed conflicts’ humanitarian protections, and to minimize human suffering and death.”  Such guidance also helps those involved in the weapons development and procurement process seek out new systems that will ensure our national security while adhering to the principles Crootof and Renz identify.

The challenges presented by AWS cut across multiple technical and professional domains.  In order to develop rational and informed AWS policy, legal concepts such as those described above must therefore be linked more directly to reasonably foreseeable AI and machine learning technology.

These challenges will be solved only through a multidisciplinary approach.  Technical experts in computer science must assist IHL specialists in describing with greater specificity how, from a technological perspective, the decision to kill could inadvertently be functionally delegated to a machine.  Deeper understanding must be achieved, for example, of how mission-type orders might be carried out by AWS that are capable of learning, and in what ways those systems might be unpredictable.  Technical methods should also be explored for limiting the capabilities and authority of AWS in order to ensure they are bounded to meet relevant performance standards.  In short, lawyers need to learn to speak AI, computer scientists must learn to speak IHL, and policymakers need to be fluent in both.
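To illustrate what such bounding might look like in software, consider a minimal, purely hypothetical sketch in Python.  The class names, thresholds, and patrol boundaries below are illustrative assumptions rather than a description of any real or proposed system.  The point is that a learning component may improve how well the system identifies contacts, but the authority it has been granted (the class of targets it may attack, the confidence it must reach, and the area in which it may operate) remains fixed by humans and therefore predictable.

```python
# Hypothetical sketch only: a hard, human-authored "engagement envelope" that gates
# whatever a learned target-recognition model proposes.  All names, classes, and
# thresholds here are illustrative assumptions, not any real system's design.

from dataclasses import dataclass

# Authorities granted by humans before deployment; the learning component cannot modify these.
AUTHORIZED_TARGET_CLASSES = {"hostile_warship"}           # class of targets the AWS may attack
MIN_CONFIDENCE = 0.99                                     # required identification confidence
PATROL_AREA = {"lat": (12.0, 14.0), "lon": (44.0, 47.0)}  # geographic bounds of the authority

@dataclass
class Contact:
    predicted_class: str   # output of the (possibly self-improving) classifier
    confidence: float      # classifier's confidence in that prediction
    lat: float
    lon: float

def engagement_permitted(contact: Contact) -> bool:
    """Return True only if the contact falls inside the fixed, human-defined envelope.

    The learned model may get better at identifying contacts over time, but the class
    of targets it is authorized to attack, and where, never changes; that fixed envelope
    is what keeps the system's lethal effects reasonably predictable.
    """
    in_area = (PATROL_AREA["lat"][0] <= contact.lat <= PATROL_AREA["lat"][1]
               and PATROL_AREA["lon"][0] <= contact.lon <= PATROL_AREA["lon"][1])
    return (contact.predicted_class in AUTHORIZED_TARGET_CLASSES
            and contact.confidence >= MIN_CONFIDENCE
            and in_area)

print(engagement_permitted(Contact("fishing_vessel", 0.999, 13.1, 45.2)))   # False: not an authorized class
print(engagement_permitted(Contact("hostile_warship", 0.95, 13.1, 45.2)))   # False: identification not confident enough
print(engagement_permitted(Contact("hostile_warship", 0.999, 13.1, 45.2)))  # True: inside the human-defined envelope
```

In an architecture along these lines, no amount of on-board learning could expand the set of objects the system is authorized to engage; expanding that set would require a new human decision, reviewable under IHL.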

Given the number of forms that future AWS might take, these tasks will be extraordinarily difficult.  A “wait and see” approach, however, is unacceptable.  Our national security interests and humanitarian ideals are too consequential to defer these questions to future generations.
