A new report from the Pentagon’s Defense Science Board (DSB) provides a useful examination of the technical issues concerning the fielding of autonomous weapons.  Unquestionably, these technical issues have a direct bearing on associated legal issues, and this report will serve as a valuable resource for those examining the legal aspects of autonomous weapons.

The Summer Study on Autonomy is the board’s second report on autonomy in recent years, following the 2012 release of The Role of Autonomy in DoD Systems.  Collectively, these two reports constitute the most comprehensive examination to date of potential military applications of autonomous technology.  While the new report provides an expansive vision of autonomous military systems, it also catalogues a number of impediments to realizing these technologies. One area that poses challenges, and carries legal ramifications as well, is testing.

A requirement exists in customary and treaty law to conduct reviews of weapons—autonomous or otherwise—in order to ensure they are lawful (see also the U.S. position in paragraph 6.2 of the DoD Law of War Manual).  Highly advanced weapons raise a number of issues with regard to weapons testing: how can one review a weapon so complex that it is impossible to test every line of code?  How can a test replicate the infinitely complex environment of cyberspace?  And how are technical results from tests translated into a narrative that a commander or legal advisor can understand?
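On the first of these questions, a bit of back-of-the-envelope arithmetic (my illustration, not the report’s) shows why exhaustive testing is a non-starter: the number of distinct execution paths grows exponentially with the number of independent decision points in the code.

```python
# Back-of-the-envelope illustration (mine, not the report's): with even a
# modest number of independent binary decision points, the number of
# distinct execution paths exceeds what any test campaign could enumerate.
branches = 300  # hypothetical count of independent branch points in the code
paths = 2 ** branches
print(f"{branches} binary branches -> {paths:.3e} distinct execution paths")
# Prints roughly 2.037e+90 -- more paths than atoms in the observable
# universe (~1e80), so testing must sample and bound rather than enumerate.
```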

Wisely, the report acknowledges this reality (“DoD’s current testing methods and processes are inadequate for testing software that learns and adapts”) and makes a series of recommendations designed to address these complications.  The report does not address other legal issues raised by autonomous technologies. For instance, can an autonomous system conduct a proportionality analysis, particularly with regard to determining military advantage?

Instead, the current report responds to a 2014 directive from the Under Secretary of Defense for Acquisition, Technology, and Logistics to study “the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all warfighting domains.” The DSB reaches two overarching conclusions.  First, it concludes “that autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions….”  Second, the DSB finds that autonomous systems “must be designed so that humans (and/or machines) can straightforwardly determine whether, once it has been deployed, it is operating reliably and within its envelope of competence—and, if not, that appropriate action can be taken.”  Both conclusions are undoubtedly true.  Governments (the U.S. and many others) and private companies are pursuing autonomy in a wide variety of applications.  The realization of these technologies in the near term—particularly in the application of force—will depend on successful human-machine teaming.  The better this relationship, the more control human operators will have over autonomous weapons, and thus the wider the scope of application of such weapon systems.
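The second conclusion, that a human must be able to tell whether a fielded system is still operating inside its “envelope of competence,” is easy to state but harder to engineer.  As a minimal sketch (the parameters and names below are hypothetical, not drawn from the report), a runtime monitor could compare observed conditions against the conditions under which the system was actually validated:

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """Hypothetical operating envelope: the range of conditions over which
    the system's behavior was actually validated during testing."""
    min_visibility_m: float
    max_wind_mps: float
    max_target_density: int

def within_envelope(env: Envelope, visibility_m: float,
                    wind_mps: float, target_density: int) -> bool:
    # True only if every observed condition falls inside the validated
    # envelope; anything else should trigger review or a handoff to a human.
    return (visibility_m >= env.min_visibility_m
            and wind_mps <= env.max_wind_mps
            and target_density <= env.max_target_density)

tested = Envelope(min_visibility_m=500.0, max_wind_mps=15.0, max_target_density=10)
if not within_envelope(tested, visibility_m=320.0, wind_mps=9.0, target_density=4):
    print("ALERT: outside tested envelope -- appropriate action required")
```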

The first section of the report considers the ever-thorny issue of defining autonomy.  Throughout, the report speaks of autonomy in the context of capabilities.  This is a wise approach, as it recognizes that autonomy is not a technology per se, but rather a capability composed of technologies.  The report also draws a useful distinction between autonomy at rest and autonomy in motion, where the former refers to autonomy in software operations and the latter to autonomy in physical movements.

The heart of the report considers four capabilities that may be automated to various degrees—sense, think/decide, act, and team—and the various technical and operational issues associated with each.  Dividing autonomy into discrete functions is not unlike other Defense Department approaches that have sought to define autonomy in the context of a decision-making cycle such as the Observe, Orient, Decide, Act (OODA) loop (see here for a good discussion of the issue).  The DSB report importantly adds the capability of “teaming,” which the report considers to be either human/machine teaming or machine/machine teaming.  This is a significant twist on traditional concepts of autonomy and serves to emphasize the centrality of the human-machine relationship.  Autonomous weapon systems are commonly seen as machines operating without human oversight or involvement.  In reality, as the report makes clear, all autonomous systems in the near future will be teamed to some extent with a human.  The fundamental importance of the human-machine relationship has been acknowledged by a number of countries, including the U.S., the U.K., and others.
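Read as software architecture, the four capabilities map naturally onto a control loop, with teaming inserted as a checkpoint between decision and action.  The toy sketch below is my own schematic of that decomposition, not a design from the report:

```python
import random

def sense() -> dict:
    # Stand-in for sensor processing: return a snapshot of the environment.
    return {"contact_detected": random.random() < 0.5}

def think_decide(observation: dict) -> str:
    # Stand-in for planning/decision logic over the sensed picture.
    return "track" if observation["contact_detected"] else "patrol"

def team(proposed_action: str, human_approves) -> str:
    # Teaming checkpoint: a human (or another machine) can veto the
    # machine's proposed action before it is executed.
    return proposed_action if human_approves(proposed_action) else "hold"

def act(action: str) -> None:
    print(f"executing: {action}")

# One pass through the loop; a fielded system would run this continuously.
# Here the "human" is simulated as approving anything except engagement.
operator = lambda action: action != "engage"
act(team(think_decide(sense()), human_approves=operator))
```

The interesting design question is entirely inside the team step: how much is routed through the human, and how much the machine may do on its own.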

Teaming, sometimes referred to as centaur warfare, is seen by many as a way to ensure human control over the autonomous system and thus potentially ameliorate some of the concerns expressed by civil society organizations such as Human Rights Watch.  Interestingly, the report points out that teaming may in fact operationally enhance the system by providing “additional opportunities for human-machine partnership,” in that it allows for control over the machine while providing flexibility to “expand the original operational context to allow for new missions and environments.” Unteamed machines in the same context would be bound by their original programming and could not adapt to changed circumstances.  A similar observation emerged from the recently concluded DARPA Cyber Grand Challenge, in which machines autonomously hacked one another in a controlled game environment.  After the competition, the winning team of programmers noted that their autonomous machine would have performed better with human teaming.

Successful teaming requires consideration of two related issues.  The first is the prospect that machine learning will cause behaviors to change over time, leading to incongruities between expected and actual machine performance.  The second is the need to establish a commonality of understanding between the machine and the human.  This is particularly pertinent—and challenging—with regard to establishing a common understanding of the goals of the human and the machine; goals which “may be expressed in different frameworks and semantics.”
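The first issue, behavioral drift, is at least partly measurable.  As a simplified illustration (the figures and threshold here are invented), one could compare the distribution of actions the system took during testing against the distribution observed in the field:

```python
from collections import Counter

# Hypothetical illustration of detecting behavioral drift in a system that
# learns: compare the action mix observed in the field against the action
# mix recorded during testing and evaluation.
baseline = Counter({"patrol": 900, "track": 90, "engage": 10})   # from T&E
observed = Counter({"patrol": 700, "track": 250, "engage": 50})  # fielded

def frequencies(counts: Counter) -> dict:
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

base_f, obs_f = frequencies(baseline), frequencies(observed)
for action, expected in base_f.items():
    shift = abs(obs_f.get(action, 0.0) - expected)
    if shift > 0.05:  # arbitrary threshold chosen for this sketch
        print(f"drift on '{action}': tested {expected:.2f}, "
              f"observed {obs_f.get(action, 0.0):.2f}")
```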

The report provides a significant discussion of the issue of trust between machine and human, a particularly acute issue in the military, where systems operate in “complex, unpredictable, and contested environments.”  The report suggests adopting an “integrated approach” that considers the notion of trust in all aspects of the development and lifecycle of the autonomous weapon.  This approach, however, would “require a transformation of the conventional model of” testing and evaluation “from discrete segments of the acquisitions cycle to an ongoing evaluation and evolution of the technology and concepts within the operational community.”  This is an important insight, as it highlights the inadequacy of the U.S. defense acquisition process for a weapon system that may evolve over the course of its lifecycle.
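One concrete reason discrete, acquisition-cycle testing breaks down for learning systems is that the system evaluated at certification is no longer, in any meaningful sense, the system in the field once its learned parameters have adapted.  A hypothetical sketch of detecting that divergence:

```python
import hashlib

def fingerprint(parameters: bytes) -> str:
    # A learning system's learned parameters change as it adapts,
    # even when its source code does not; hashing them exposes that.
    return hashlib.sha256(parameters).hexdigest()

certified = fingerprint(b"parameters as evaluated during acquisition T&E")
fielded = fingerprint(b"parameters after six months of in-field adaptation")

if fielded != certified:
    print("system no longer matches its certified baseline -- re-evaluate")
```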

The way commanders currently track military operations is also likely to be rendered obsolete by autonomous weapons.  In modern military operations, commanders rely on a common operating picture (see U.S. doctrine here) which depicts relevant objects and events.  With the increasing complexity of the modern battlefield, militaries have struggled to find technological solutions that maximize commanders’ situational awareness.  The U.S. Army, for example, utilizes a system called Command Post of the Future.  Autonomy, as the report notes, creates unique complexities given the speed at which it operates and the difficulty of achieving a commonality of understanding between the machine and the human.  The organizers of the DARPA Cyber Grand Challenge also recognized the challenge of visualizing the actions of autonomous systems in real time, and spent considerable resources developing a suite of programs designed to visualize computer actions.

Given the report’s emphasis on human-machine teaming, it is worth briefly considering the implications of this relationship.  First, from a practical perspective, the nature of the human-machine relationship will differ for each system, and may in fact differ from mission to mission.  For example, a system may be designed such that the human operator can progressively cede control over certain functions to the machine as the situation becomes more complex.  Thus, the complexity of a mission or the environment—largely unknown at the outset—might determine the degree of autonomy ultimately exhibited by the system.  This introduces the possibility that a superior may order the employment of a system without full knowledge of the degree of autonomy the system will ultimately exercise.
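If the degree of autonomy in use can change mid-mission, accountability will turn on the ability to reconstruct, after the fact, which functions the machine actually controlled and who authorized each transfer.  One plausible engineering answer, sketched here as my own illustration rather than anything the report prescribes, is an audit trail of every delegation:

```python
from datetime import datetime, timezone

class AutonomyLog:
    """Hypothetical audit trail: every transfer of a function from human to
    machine is recorded with who authorized it and when, so the degree of
    autonomy actually in use can be reconstructed after the mission."""
    def __init__(self) -> None:
        self.machine_controlled: set[str] = set()
        self.entries: list[tuple[str, str, str]] = []

    def cede(self, function: str, authorized_by: str) -> None:
        self.machine_controlled.add(function)
        stamp = datetime.now(timezone.utc).isoformat()
        self.entries.append((stamp, function, authorized_by))

log = AutonomyLog()
log.cede("navigation", authorized_by="operator")
log.cede("target_identification", authorized_by="operator")
for stamp, function, who in log.entries:
    print(f"{stamp}: '{function}' ceded to machine, authorized by {who}")
```

A record of this kind would not resolve the accountability questions that follow, but it would at least preserve the facts on which they depend.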

This then raises questions of accountability.  While some have argued that this issue is insoluble, I believe the current normative framework is sufficient to address accountability concerns.  Among other avenues of accountability, a superior could be held responsible where they knew or should have known the system would operate in an unlawful manner.  Accountability, through superior responsibility or otherwise, is complicated when a human operator is tied—to some degree—to the operation of an autonomous system.  In such circumstances, one must consider the role of the operator relative to the commander, and relative to the machine.  Was the system deployed into an environment that the superior (or operator) knew would require a level of autonomy beyond the lawful capability of the system?  More fundamentally, as an operator cedes functions to the machine, do they incur more or less responsibility for the machine’s actions?  What if the machine assumes greater control over the situation on its own initiative?  Was the possibility known…by the operator…by the superior?

These are just some of the questions that arise from the new DSB report, and ones the Defense Department will wrestle with for years to come.