US Needs to Stop Tiptoeing Around the “Killer Robots” Threat

When it comes to banning “killer robots,” the United States is going to take some convincing. That was one major takeaway from April’s multilateral meeting on the matter, where a US delegation joined 90 other nations at the United Nations in Geneva to discuss what to do about the development of “lethal autonomous weapons systems.”

In November 2012, the US became the first nation to articulate a detailed policy on killer robots, citing a long list of concerns and obstacles that would have to be overcome before developing and acquiring them. It has been careful, however, to stress that Department of Defense Directive 3000.09 “neither encourages nor prohibits the development” of future autonomous weapons systems.

Indeed, it appears that of all nations, the US is the farthest along in moving toward fully autonomous weapons. Last November, The New York Times reviewed several examples of missile systems with various degrees and forms of human control under development or in use by the US, Israel, Norway, and the UK.

Despite its investment in “semi-autonomous” weapons, the US has been one of the strongest supporters of the international talks on the emerging technology of lethal autonomous weapons systems held under the Convention on Conventional Weapons, participating actively in the meetings in May 2014 and April 2015. But the US’s eagerness to engage in talks about such weapons should fool no one into believing it supports a ban. At the discussions last month, it was one of only two nations (the other was Israel) to argue that the door should remain open for future development and acquisition of these weapons.

While other nations meeting in Geneva focused on the many ethical, legal, technical, operational, societal, and proliferation concerns with autonomous weapons systems, the US and Israel were alone in speaking of the potential advantages or benefits of such weapons.

The US has said it will support more substantive, in-depth deliberation of key issues, including how to “ensure appropriate levels of human judgment over the use of force,” but it sees a ban as “premature” at this time.

But as the Campaign to Stop Killer Robots notes, the concept of meaningful human control “is not about finding or building a ‘better’ or ‘safer’ autonomous weapon system but about drawing the line to prohibit any system that doesn’t come under human control.” To this end, the campaign hoped to see the talks in Geneva focus on the concept of meaningful human control and, flowing from this, a preemptive ban on fully autonomous weapons.

At the UN meeting, Canada, France, and the UK explicitly stated that they will not pursue such autonomous weapons systems, but none of them took the next logical step of endorsing a preemptive international ban. Instead, they supported calls by the US and Germany for a future focus on transparency measures by the Convention on Conventional Weapons and discussion of “best practices” for national-level legal reviews of new weapon systems.

Such weapons reviews are undoubtedly helpful, but they are no substitute for an international regime controlling the use of autonomous weapons. States should not be relied upon to police themselves in this arena. As the International Committee of the Red Cross told the meeting, “efforts to encourage implementation of national legal reviews are not a substitute” for states considering “possible policy and other options at the international level to address the legal and ethical limits to autonomy in weapon systems.”

Greater transparency surrounding the development of such weapons could also help prevent the use of fully autonomous weapons, but transparency is only one step. Even while welcoming “a more transparent discussion of how current semi-autonomous weapons systems, automated defense systems, and remotely-operated weapons systems are kept under meaningful human control,” the International Committee for Robot Arms Control noted that transparency is not on its own a sufficient means to regulate autonomous weapons.

In refusing to endorse a ban on killer robots, the US may once again be putting itself in a position of being overtaken by the rest of the world when it comes to dealing with a weapon that is unacceptable from a humanitarian perspective. Twenty years ago, the US became one of the first nations to sound the alarm about the dangers of another weapon, landmines. President Bill Clinton even called for their “eventual elimination.” But when it was time to move quickly toward a ban on the weapons, the US dragged its feet and fell behind the rest of the world for the better part of two decades, refusing to join the 1997 Mine Ban Treaty. Only last year did the Obama administration set the US objective of joining the treaty, acknowledging that it provides the best framework for achieving a world without antipersonnel landmines.

Nations can achieve international agreement on this matter of urgent concern, but not if they “go slow and aim low,” as Human Rights Watch has warned. With the clock ticking on the rapid development of increasingly autonomous weapons systems, the US and all countries should not hesitate to take the path toward a comprehensive, preemptive ban on autonomous weapons systems.

About the Author(s)

Mary Wareham

Arms Division Advocacy Director at Human Rights Watch, Chair of the Cluster Munition Coalition, Editor of the Cluster Munition Monitor 2015