A semi-autonomous X-47B drone aboard the aircraft carrier USS George H.W. Bush in 2013. Image Credit: US Navy via Wikimedia.
What is the appropriate role of autonomy and human control in the use of force? How much human control, and what type of control, is necessary and appropriate?
Over the past year of discussions on autonomous weapons, the notion of “meaningful human control” has increasingly gained traction. Originally put forward by the British NGO Article 36 in a 2013 report, it has since been echoed by other NGOs and even some states.
Yet, what is “meaningful human control”? There is no clear definition or agreement at this point, although, as the UN Institute for Disarmament Research points out, “the idea of Meaningful Human Control is intuitively appealing even if the concept is not precisely defined.” Without a clear definition, however, meaningful human control risks being only a pleasant-sounding catchphrase. At best, it merely shifts the debate to a new question: what is “meaningful”? At worst, failure to define the term clearly could, if embedded in international discussions, lead to flawed policy choices.
Perhaps more fundamentally, given that the laws of armed conflict already lay out a number of principles and rules that apply to the use of weapons during armed conflict, what is gained by adding the concept of “meaningful human control?” Or, to put it another way, what problem or problems do autonomous weapons raise that existing principles under the laws of war are insufficient to address?
As part of our ongoing Ethical Autonomy project at the Center for a New American Security, we assessed statements made by those advocating for meaningful human control and studied how weapons with varying degrees of autonomy are used today. We found that those advocating for the concept of meaningful human control raise three concerns with regard to autonomous weapons:
- Autonomous weapons could create an “accountability gap.”
- Autonomous weapons could lead to an off-loading of moral responsibility for killing.
- Autonomous weapons could be designed and used in such a way that results in them being “out of control” on the battlefield.
Thus, any definition of meaningful human control ought to address these issues. In addition, a usable concept of meaningful human control must account for the varied ways in which weapons have been used to date and should address what is potentially new about greater autonomy in weapon systems.
For example, at present, a fighter pilot in an engagement might have only seconds to decide whether to fire a missile at an enemy fighter. When that engagement occurs at beyond-visual-range, the pilot has meaningful human control even though the pilot makes the decision to fire entirely based on information received from sensors and computer processors — machines — and computers then guide the missile onto the target. It is critical to understand what, if anything, differs as that process becomes more automated.
Understanding how weapons are used today is also vital to understanding what is new about autonomy that raises concern about meaningful human control. Since the Campaign to Stop Killer Robots generally treats autonomous weapons as a concern for future systems, a definition of meaningful human control that rules out large swathes of weapons that have been used without controversy for decades undoubtedly misses the essence of what is new about autonomy that warrants thinking about meaningful human control.
An examination of how weapons are used today points the way to three essential components of meaningful human control:
- Human operators are making informed, conscious decisions about the use of weapons.
- Human operators have sufficient information to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action.
- The weapon is designed and tested, and human operators are properly trained, to ensure effective control over the use of the weapon.
Turning these components into standards of meaningful human control would help ensure that commanders are making conscious decisions and that they have enough information when making those decisions to remain legally accountable for their actions. This also allows them to use weapons in a way that ensures moral responsibility for their actions, and could provide a basis upon which to design accountability standards for weapons that incorporate greater autonomy. Furthermore, appropriate design and testing of the weapon, along with proper training for human operators, helps ensure that weapons are controllable and do not pose unacceptable risk.
While these three criteria help point the way toward an improved understanding of what meaningful human control entails, they are merely a starting point. Many outstanding issues remain. These include the level at which meaningful human control should be required, whether meaningful human control is an overarching concept to ensure compliance with the laws of war or a new principle in its own right, and whether meaningful human control is even the right concept for thinking about human control over autonomous weapons. Further discussion and dialogue are needed on autonomy and human control in weapon systems to better understand these issues and what principles should guide the development of future weapon systems that might incorporate increased autonomy.