(In Part I of this post on UN talks on lethal autonomous weapons, I discussed how the underlying artificial intelligence that enables autonomous systems is improving rapidly. In this Part II, I will examine different policy approaches for dealing with this uncertainty.)

This week, countries are meeting at the United Nations to discuss lethal autonomous weapons and the line between human and machine decision-making. One complicating factor in these discussions is the rapid pace at which automation and artificial intelligence are advancing. Nations face a major challenge in setting policy on an emerging technology where the art of the possible is always changing.

When nations last met 18 months ago, DeepMind’s AlphaGo program had recently dethroned the top human player in the Chinese strategy game Go. AlphaGo reached superhuman levels of play by training on 30 million moves from human games, then playing against itself to improve even further. But that wasn’t enough. Just last month, DeepMind unveiled a new version called AlphaGo Zero that trained itself to play Go without any human input at all – only access to the board and the rules of the game. It defeated the 2016 version of AlphaGo 100 games to zero after only three days of self-play. This rapid pace of progress means nations face tremendous uncertainty about what might be possible with artificial intelligence even a few years into the future.
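To make the shift from learning on human games to pure self-play a little more concrete, here is a deliberately toy sketch of the self-play idea; it is my own illustration, not DeepMind's method, and the names (PILE, choose, self_play_episode) are invented for this example. A tabular agent learns a simple take-away game purely from the outcomes of games it plays against itself, starting with no human data at all. AlphaGo Zero itself combines deep neural networks with Monte Carlo tree search at vastly larger scale.

```python
# Toy illustration of self-play learning (not AlphaGo Zero's actual algorithm).
# Game: a single pile of stones; players alternate taking 1-3 stones, and
# whoever takes the last stone wins. The agent starts with no knowledge and
# learns a value table purely from the outcomes of games against itself.

import random
from collections import defaultdict

PILE = 21            # starting number of stones
ACTIONS = (1, 2, 3)  # legal move sizes
EPSILON = 0.2        # exploration rate during self-play
ALPHA = 0.1          # learning rate for value updates
Q = defaultdict(float)  # Q[(stones_left, action)] -> estimated value for the mover

def choose(stones, greedy=False):
    """Pick a move: mostly the best-known move, sometimes a random one to explore."""
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode():
    """Play one game of the agent against itself and update values from the result."""
    history = []  # (stones, action) for each move, players alternating
    stones = PILE
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move won. Walk backwards through the game,
    # crediting the winner's moves (+1) and penalizing the loser's (-1).
    reward = 1.0
    for stones, action in reversed(history):
        Q[(stones, action)] += ALPHA * (reward - Q[(stones, action)])
        reward = -reward  # switch perspective each ply

for _ in range(50000):
    self_play_episode()

# Inspect what the agent learned for a few positions near the end of the game.
for stones in (5, 6, 7, 9):
    print(stones, "stones -> take", choose(stones, greedy=True))
```

After enough self-play, the agent tends to rediscover the known optimal strategy for this game (leave the opponent a multiple of four stones) without ever seeing a human play it, which is the basic dynamic, at a trivial scale, behind AlphaGo Zero's progress.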

How should policymakers deal with this uncertainty?

One way to approach this problem is to shift the focus from the technology, which is constantly changing, to the human, who does not change. Rather than ask what technology we should or should not have, we can ask what role humans ought to play in war.

Humanity’s long history of regulating weapons of war

A number of approaches have been proposed to answer the question of what to do about the rapid advancement of autonomy in weapons. The Campaign to Stop Killer Robots – a consortium of over 60 non-governmental organizations – has proposed that nations adopt a legally-binding treaty banning the development, production, and use of fully autonomous weapons. The aim of this approach is to stop the trend towards greater automation in weapons in its tracks before fully autonomous weapons can be built.

Others have argued for a different approach, trusting instead in the existing rules of international humanitarian law (IHL). Their rationale is that the concerns raised about autonomous weapons, such as civilian harm, are already covered under IHL, making a ban unnecessary at best and harmful at worst. In fact, some have argued a ban might restrict some uses of automation that could conceivably improve precision in war and save lives. In a post on Just Security, Charles J. Dunlap, Jr. argued that ad hoc restrictions on weapons were ill-conceived. Instead, he said nations should focus their attention on “strict compliance” with IHL and not “demonize specific technologies.”

This is not the first time humanity has attempted to grapple with new weapons that were seen as problematic. The historical track record of such efforts is mixed. Efforts to ban the crossbow in the 12th century failed miserably, as did a short-lived attempt to restrict firearms in England in the 1500s. One of the most successful weapons bans ever, the Japanese relinquishment of firearms from 1607 to the mid-1800s, only succeeded because the ruling government had a monopoly on power in Japan and faced no external enemies.

The turn of the 20th century saw a flurry of attempts to regulate or ban new weapons that emerged out of the industrial revolution, including inflammable, exploding, or expanding bullets; air-delivered weapons; submarines; and poison gas. A challenge in all of these cases was that nations failed to foresee the specific way in which these technologies would evolve.

The 1868 St. Petersburg Declaration banned explosive or inflammable projectiles below 400 grams (roughly equivalent to a 30 mm shell) on the premise that they would “uselessly aggravate the sufferings of disabled men.” This prohibition failed to foresee tracer and grenade projectiles, both of which technically violate the 400-gram rule but are widely used by militaries today. In practice, nations have adhered to the spirit, if not the letter, of the 1868 Declaration by not employing rounds intended to explode inside the human body. The ban succeeded in its intent, even though its specific technological restriction proved unworkable. In other cases, states have not proven as flexible in how they interpret regulations as technology evolves.

Expanding bullets, banned in 1899, have generally not been used by militaries to date, even though they are legal to purchase for personal defense in the United States. In many settings, such as home defense, expanding bullets are preferable not only because they are more effective but also because they are less likely to pass through the body and hit bystanders. The United States government has taken the position that expanding bullets are banned in war only if they are intended to cause superfluous injury, focusing on the intent behind the weapon rather than its specific functioning. Others have taken the position that they are banned outright. Either way, the ban has largely held even though technology evolved in an unexpected way.

Restrictions on poison gas, air-delivered weapons, and submarines, on the other hand, failed miserably. It is possible that such rules would never have succeeded in any case, but the specific wording of the rules shows countries’ limitations in anticipating how new technologies develop. The 1907 Hague rules prohibited poison gas attacks only by projectiles, a loophole Germany exploited in defending its first successful gas attack at Ypres in 1915, which used canisters. The 1907 prohibition on attacks from the air “by whatever means” correctly anticipated the use of the airplane in war, which had first flown only a few years earlier, in 1903. The restrictions only prohibited attacks against “undefended” cities, however, failing to anticipate the offense’s advantage in bombing campaigns and the reality that “the bomber will always get through.” Similarly, attempts at restricting submarine warfare failed to grapple with the reality that submarines were hopelessly vulnerable if they surfaced to comply with long-standing “prize rules” of maritime warfare. These practical realities led militaries to quickly abandon the rules and shift to unrestricted submarine warfare.

One historical lesson is the challenge of setting limits that stand the test of time as technology evolves. There are some successful examples, though. Bans on biological weapons (1972), using the environment as a weapon (1976), and blinding lasers (1995) have held up better, perhaps because they focus on the intended use of the weapon rather than the specific technology. For example, the blinding laser ban prohibits laser weapons that are “specifically designed … to cause permanent blindness.” This is a different approach from the 1868 St. Petersburg Declaration, in that it does not set a specific technological limit, such as the power of the laser, but instead focuses on the intended harmful effect. The implication is that bans or regulations on emerging technologies are more likely to be successful when they focus on the intent of the weapon, rather than try to set technical restrictions while the technology is in flux.

The reality that technology changes does not make it wrong to “demonize specific technologies,” though. In all of these cases, nations were right to be concerned about the harm that could come from an emerging technology. Some weapons are worse than others, whether because they cause unnecessary suffering or are more indiscriminate. Poison gas increased combatants’ suffering without making wars end any faster. Biological weapons would be inherently indiscriminate. Aerial bombing of cities led to hundreds of thousands of civilian deaths. History demonstrates quite clearly that the principles of IHL alone are not sufficient to protect us from the worst horrors of war. As wars progress, restraints come off and combatants can slide further into barbarism. On the rare occasions when combatants have been able to refrain from using certain weapons against each other in war, such as poison gas on the battlefields of World War II, the result was undoubtedly less suffering in war.

At the same time, history shows that stigmatizing a weapon, or even an outright ban, is not enough to ensure successful restraint. At the outset of World War II, European nations attempted to refrain from bombing one another’s civilian populations. Hitler even issued orders to the Luftwaffe not to execute “terror attacks” on British cities early in the war out of concern that Britain would retaliate. These rules quickly collapsed in wartime, however. After German bombers strayed from their military targets during a nighttime raid and bombed central London by mistake, Britain bombed Berlin. Hitler responded by launching the London Blitz.

For nations to effectively refrain from using certain weapons or tactics in war, a number of conditions must be met. The horribleness of the weapon must outweigh its military effectiveness. Nations must fear the consequences of using a weapon, such as reciprocal use by their enemy. And there must be a clear delineation between what actions or weapons are prohibited and which are permitted.

One of the reasons why European nations refrained from using poison gas on the battlefields of World War II, while efforts to restrict bombing cities collapsed, is that bombing was permitted in some cases (against military targets) but not others (civilians). Mistakes and miscalculation in the fog of war could cause one category to bleed into another. Poison gas, on the other hand, was not used on the battlefield at all. (Germany used gas during the Holocaust and Japan used gas in small amounts in China, which did not have chemical weapons.) It seems reasonable to assume that if combatants had started using poison gas against one another in one setting but not others, its use would have quickly expanded. Limits are easiest when they are clear and simple.

Nations have struggled to figure out what limits, if any, might make sense when it comes to incorporating autonomy into weapons. It is possible that weapons that could search for, decide to engage, and engage targets on their own could be more precise than humans and reduce civilian casualties. On the other hand, it is also possible that they would increase the scale of accidents, dangerously accelerate crises, or lead to a lack of human responsibility for killing. (It is even possible they could do both.) On the whole, though, it is hard to see how moving to a new era of warfare in which killing proceeds unchecked by human hands would be beneficial to humanity.

Moving the focus away from technology and back to the human

To date, lethal force decisions in war have been made by humans because there was no other way to fight. But technology is allowing humans to outsource more and more of the engagement cycle to machines. One could, of course, examine the state of the technology today and make a determination about which tasks are best done by people and which by machines. The risk in such an approach is that it could quickly be rendered moot by technological change. It could prohibit tools that turn out not to be particularly harmful, or it could fail to prohibit dangerous uses of technology. An alternative is to ask: If we had all of the technology that we could imagine, what role would we still want humans to play in lethal force decisions? What decisions, if any, require uniquely human judgment in war, and why?

Over the past few years, there has been growing interest in taking this sort of approach to autonomous weapons. Many have called for nations to adopt a principle of ensuring “meaningful human control” in war. Others have expressed this sentiment using different terms, such as “appropriate levels of human judgment” or “appropriate human involvement.” The specific labels are less important, especially since none of these terms are defined, but there is significant value in countries discussing what role humans ought to play in lethal force decisions. Instead of quibbling over semantics, it would be more fruitful for nations to try to understand the substance of the concepts behind these terms.

There are many things machines cannot do. Today, they cannot understand the context for their actions, although that may change over time. Nor can they weigh competing values and apply judgment, although that too may change someday. Machines are less likely, however, to match humans’ ability to empathize with others, feel morally responsible for their actions, and weigh the value of a human life. And even if we could offload these burdens to machines, would we want to? What sort of people would we be if there were no one to bear the burden of war, no one who felt responsible for killing, and no one who slept uneasily at night? What would war become without human involvement? In some ways, perhaps better. Humans commit atrocities and war crimes. On the other hand, humans can also exercise empathy and mercy, which can provide a check on the worst horrors of war. For the first time in human history, we have a choice about what role humans play in lethal force decisions in war. Machines can do many things, but they cannot answer these questions for us.