Today, an organization of scientists released a call for a preemptive legal ban on autonomous weapons systems (AWS) – those that can select and engage targets without human intervention. 

Although fully autonomous weapons able to carry out offensive operations do not yet exist, many consider the development of increasingly autonomous weapons to be very likely, if not inevitable.

The call for a ban was signed by artificial intelligence experts, roboticists, engineers, and others from 37 countries, including numerous significant figures in the relevant fields.  Among the signatories are Mark Bishop (Chair of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB)), Illah Nourbakhsh (Carnegie Mellon Professor of Robotics), Alan Bundy (elected Fellow of the Association for the Advancement of Artificial Intelligence), and James Hendler (Computer Science Professor, and former DARPA Chief Scientist).

The scientists’ statement articulates three reasons for a ban: autonomous targeting might undermine human responsibility and accountability for the use of force; technical limitations may mean that the weapons will not be able to comply with legal restrictions on the use of force; and complex autonomous systems developed by different countries might interact with each other in unstable and dangerous ways.

The call for a ban comes amid, and will likely add to, growing diplomatic and international community interest in discussing the many issues raised by AWS.  When an independent UN expert first raised concerns in 2010, the response from states was minimal.  A year later, the President of the ICRC described the potential deployment of autonomous systems as “a paradigm shift.”  In 2012, Human Rights Watch released a major report on AWS, and subsequently joined with many organizations to start a global campaign to ban the weapons.  In 2013, a further UN report – calling for a moratorium on AWS development and the creation of an international high-level panel to study AWS – received significant attention.  In the past four months, numerous states have expressed interest in engaging in further international discussion of the issues, including Algeria, Austria, Brazil, Egypt, France, Germany, Morocco, Switzerland, and the United States.

The reasons for a ban highlighted in the scientists’ statement today are three of the numerous arguments frequently debated in the growing scientific, philosophical, arms control, legal, and human rights literature on autonomous weapons.  The issues are complex, and scholars and practitioners have made various legal, political, strategic, military operational, and ethical arguments for and against AWS (and for and against a ban in particular).  Some of these other experts are likely to argue strongly against this latest call for a ban, especially those who have written on the potential for AWS to be as ethical as, if not more ethical than, human soldiers, and those who argue that in the face of scientific uncertainty, the weapons should not be preemptively banned.

In the AWS literature, core debated issues include the following:

  • Legal debates focus primarily on two areas: whether AWS would undermine or promote compliance with international law (and, in particular, whether they would save or put at undue risk civilian lives); and whether the weapons would foster or undermine legal accountability regimes.
  • Political and strategic concerns include whether the weapons would lead to a destabilizing arms race, introduce further military inequality, lower the threshold for willingness to resort to force, or create blowback for civilians.  On the other hand, potential AWS benefits could include advantages to a state’s national security, deterrence against aggressors, and promotion of humanitarian intervention (by the Security Council or others) due to a lower threshold for using force.
  • Moral and ethical arguments also feature prominently.  Some argue that AWS would be dehumanizing or destructive of human dignity, that human judgment should always be key to any lethal decision, that human emotions are an important check on military abuses, or that the lack of any risk to one side’s personnel may make AWS immoral.  In response, others have argued: human judgment would remain with AWS but would exist at an earlier (design) phase; direct human involvement in killing decisions is not a moral requirement; AWS could be more ethical because they could be programmed to have no negative human emotions; AWS could be used to exercise more conservative and less-lethal force; AWS could monitor human behavior and encourage accountability; and norms that apply to armed conflict do not require one side to undertake risk.

At Just Security, we have prepared a list of some of the key readings in this area, which represent a diversity of viewpoints on AWS and on a legal ban or regulation.


Note: Sarah Knuckey was a Senior Advisor to the UN Special Rapporteur on extrajudicial executions at the time of the 2010 UN report on AWS.  She also provides independent legal advice on an ad hoc basis to the scientists’ organization (ICRAC), which released the call for a ban.