Book Synopsis: Governing Lethal Behavior in Autonomous Robots
Ronald C. Arkin | Governing Lethal Behavior in Autonomous Robots (Chapman and Hall/CRC, 2009)
by Jiou Park, a 2013 graduate of New York University School of Law
Dr. Ronald C. Arkin is a roboticist, roboethicist, and a Regents’ Professor and Associate Dean for Research and Space Planning at the School of Interactive Computing within the College of Computing at the Georgia Institute of Technology. Arkin is an expert in behavior-based reactive control and action-oriented perception for mobile robots and unmanned aerial vehicles, hybrid deliberative/reactive software architectures, robot survivability, multiagent robotic systems, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Through his research, Arkin has been highly active in military robotics, receiving robotics research funding from the Department of Defense, including from the Defense Advanced Research Projects Agency (DARPA) and the research and development programs of the U.S. Army and the U.S. Navy.
In his third book, Governing Lethal Behavior in Autonomous Robots, Arkin draws largely on research conducted since 2006 under a contract with the Department of Defense. The objective of his research was to create an “artificial conscience” for military robots. If successfully created, this “artificial conscience” would allow military robots to behave “ethically” – for example, they would refrain from using lethal force against children or non-combatants and would even be able to comply with the laws of war in real battlefield situations. Drawing on the results of this research, Arkin hypothesizes not only that it is possible to create military robots capable of behaving ethically, but also that intelligent robots can behave more ethically than human soldiers on the battlefield.
Arkin first describes two reasons why it is necessary to develop ethical military robots. First, there is an unmistakable and irreversible trend toward greater autonomy in weapon systems. Arkin points to a number of existing unmanned weapon systems, ranging from ground robots such as the PackBot to aerial vehicles such as the Reaper, commonly known as “drones,” and also cites military and technology experts asserting that the trend toward autonomous military robots is accelerating. According to Arkin, there is a significant possibility that robots with the capacity to identify and engage targets without human supervision will be operating side-by-side with human soldiers within the next twenty to thirty years.
Second, Arkin argues that robot soldiers may not only avoid many of the problems that plague human soldiers but also outperform them. For example, Arkin argues that soldiers are prone to behavior that results in atrocities due to emotional and psychological factors and are vulnerable to psychological injuries. Moreover, Arkin refers to studies finding that the general reluctance of human soldiers to “shoot to kill” undermines effective battlefield performance. Thus, according to Arkin, military robots have the potential to behave not only more ethically but also more effectively on the battlefield than human soldiers.
However, whether lethal robots with an “artificial conscience,” capable of behaving “more humanely than humans,” could ever actually come into existence is a separate question. Arkin devotes the second half of Governing Lethal Behavior in Autonomous Robots to showing that it would be possible to develop such “ethical robots.” He focuses on how a military robot’s programming would work to ensure ethical behavior, starting from a hypothetical situation in which all other necessary technologies are already present.
An autonomous robot decides how to act through a “behavioral mapping,” which translates specific sensory inputs, such as what the robot sees or hears, into specific actions, such as shooting or moving away from an object. According to Arkin, the most basic way to embed ethical behavior in a robot is to impose a set of constraints on its behavioral mappings. These constraints, Arkin says, would be derived from the laws of war (including the principles of necessity, humanity, proportionality, and discrimination), the rules of engagement, and any other rules applicable to the mission at hand, such as those governing peace enforcement. As a result, upon encountering a given sensory input, the robot will only be able to take an action that does not violate the constraints programmed into its behavioral mappings. The ultimate goal is to ensure, through these constraints, that only actions complying with the laws of war and rules of engagement will occur.
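The logic of such a constraint-governed mapping can be made concrete with a short sketch. The following Python fragment is a hypothetical illustration only: the percept fields, action names, and the single constraint shown here are invented for clarity and do not come from Arkin’s book.

```python
# Minimal sketch of a constrained behavioral mapping (hypothetical
# names throughout; not Arkin's implementation). Each constraint is a
# predicate that a candidate (percept, action) pair must satisfy.

CONSTRAINTS = [
    # Discrimination: never engage a target classified as a non-combatant.
    lambda percept, action: not (
        action == "engage" and percept["status"] == "non-combatant"
    ),
]

def behavioral_mapping(percept):
    """Translate a sensory input into a candidate action (naive mapping)."""
    return "engage" if percept["armed"] else "withdraw"

def act(percept):
    """Permit the mapped action only if it violates no constraint."""
    action = behavioral_mapping(percept)
    if all(constraint(percept, action) for constraint in CONSTRAINTS):
        return action
    return "hold fire"  # safe fallback when a constraint would be violated

print(act({"armed": True, "status": "combatant"}))      # -> engage
print(act({"armed": True, "status": "non-combatant"}))  # -> hold fire
```

The point of the sketch is that the mapping itself can be naive; the constraints derived from the laws of war and rules of engagement are what prevent an impermissible action from ever being emitted.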
Arkin presents four architectural components that can be used to achieve that goal. The first is the “ethical governor,” which reviews the robot’s selected action prior to its enactment. The second is the “ethical behavioral control,” which ensures that any action the robot can select is ethical in the first place. In other words, the ethical governor acts as a reviewer once a behavior has been selected, while the ethical behavioral control acts as a constraining principle prior to the selection of behavior. The third is the “ethical adaptor,” which reviews the robot’s action after the fact and updates the robot’s ethical constraints accordingly. The fourth and final component is the “responsibility advisor,” which makes it possible to assign responsibility to a human agent when the robot acts in an unethical way.
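The distinction between the first two components can be illustrated with a governor-style filter that sits between behavior selection and actuation. This is a minimal, hypothetical Python sketch of that general pattern, not Arkin’s architecture; the class, field, and action names are invented.

```python
# Hypothetical sketch of a governor-style filter: behavior selection
# proposes an action, and the governor reviews it before enactment.
# (By contrast, an ethical behavioral control would restrict the set
# of selectable actions up front, so no review would be needed.)

class EthicalGovernor:
    def __init__(self, constraints):
        # Predicates derived from the laws of war / rules of engagement.
        self.constraints = constraints

    def review(self, percept, proposed_action):
        """Return the action if every constraint permits it; else veto."""
        if all(c(percept, proposed_action) for c in self.constraints):
            return proposed_action
        return "hold fire"  # veto: suppress the action before enactment

governor = EthicalGovernor([
    # Never fire on or near a protected structure (e.g., a hospital).
    lambda percept, action: not (action == "fire" and percept["protected_site"]),
])

print(governor.review({"protected_site": True}, "fire"))   # -> hold fire
print(governor.review({"protected_site": False}, "fire"))  # -> fire
```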
Arkin believes that by combining these architectural components, it is possible to strike a balance between the robot’s ability to execute missions effectively and absolute compliance with the laws of war. To facilitate this result, Arkin presents a basic protocol that a robot would have to follow: (i) prior to engagement, confirm that specific people have accepted responsibility for the robot’s actions; (ii) ensure that the mission at hand complies with the principle of necessity; (iii) maximize discrimination between enemy combatants and non-combatants; and (iv) use the minimum force required. According to Arkin, by following this protocol and ensuring that all other constraints derived from the rules of engagement and laws of war are programmed into the robot, “ethical” military robots will be able to avoid atrocities and cause fewer non-combatant casualties than human soldiers.
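Read as pseudocode, the protocol amounts to a sequence of gating checks before any use of force. In the hypothetical Python sketch below, every function, field, and threshold is an invented stand-in for the far harder perceptual and legal judgments the protocol actually requires.

```python
# Hypothetical sketch of the four-step engagement protocol as a
# sequential checklist. All names and values here are illustrative.

def may_engage(mission, target):
    # (i) Specific humans have accepted responsibility for the robot's acts.
    if not mission["responsibility_accepted"]:
        return False
    # (ii) The mission complies with the principle of necessity.
    if not mission["militarily_necessary"]:
        return False
    # (iii) Maximize discrimination: require high confidence that the
    # target is an enemy combatant before force is even considered.
    if target["combatant_confidence"] < 0.99:
        return False
    return True

def engage(target):
    # (iv) Use the minimum force required for the military objective.
    print(f"engaging {target['id']} with minimum required force")

mission = {"responsibility_accepted": True, "militarily_necessary": True}
target = {"id": "T-1", "combatant_confidence": 0.75}

if may_engage(mission, target):
    engage(target)
else:
    print("engagement withheld")  # confidence too low in this example
```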
Although Arkin acknowledges that embedding ethics into a robot is a daunting task, he argues that battlefield ethics can be embedded into machines more easily than non-battlefield ethics. His argument relies on two basic rules as a starting point: (i) any lethal engagement must be obligated under the rules of engagement, and (ii) the engagement must not conflict with any law of war. From this starting point, more sophisticated programs can be developed. In light of the vast strides that must be made to bring such ethical military robots into existence, Arkin closes his book with a chapter dedicated to demonstrating the feasibility of his proposal using a simple test program.
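These two rules compose into a single permissibility test: lethal force is allowed only when some rule of engagement obligates it and no law of war forbids it. The following minimal Python sketch illustrates that composition; the rule sets are invented placeholders, not the book’s formalization.

```python
# Hypothetical sketch of the two-rule starting point: force is
# permitted only if obligated by the ROE AND barred by no law of war.

def lethal_force_permitted(situation, roe_obligations, low_prohibitions):
    """Apply the two-rule test to a situation description."""
    obligated = any(rule(situation) for rule in roe_obligations)
    prohibited = any(rule(situation) for rule in low_prohibitions)
    return obligated and not prohibited

roe_obligations = [lambda s: s["target_declared_hostile"]]
low_prohibitions = [lambda s: s["target_near_protected_site"]]

print(lethal_force_permitted(
    {"target_declared_hostile": True, "target_near_protected_site": False},
    roe_obligations,
    low_prohibitions,
))  # -> True
```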
As Arkin readily admits, it is impossible to tell whether ethical military robots will ever come into existence. In this light, Governing Lethal Behavior in Autonomous Robots can be read as Arkin’s argument for why we should think about developing military robots that can maintain – if not maximize – compliance with the laws of war and rules of engagement. His book is intended to provide at least a modest starting point for such discussions from the perspective of a long-time military roboticist.
Philip Alston on Arkin’s Governing Lethal Behavior in Autonomous Robots:
Lethal autonomous robotic weapons are certain to play a major part in future warfare, and are even likely to be used by law enforcement agencies in some situations. But the very idea that machines rather than humans will take the decision to kill particular individuals raises major ethical questions, not to mention fundamental concerns in relation to international humanitarian law and human rights law. Many proponents rely primarily on an assertion that reliance on robots will reduce battlefield casualties on all sides, thus justifying the use of robotic decision-makers. But in this pathbreaking analysis Ronald Arkin lays out a complex ethical system that he claims will not just promote but will actually enhance existing protections for the laws of war. No other researcher has made the case for ethical autonomy in unmanned lethal systems in as systematic or scientific a fashion, nor has any proponent of lethal robotics placed “the plight of the non-combatant” at the heart of such a justificatory enterprise. Because of its combination of ethical, legal, and technical considerations, it is an indispensable reference point for both proponents and opponents of these developments. But Arkin’s insistence that his analysis is preliminary, subject to major revision, and not conclusive, cannot obscure the fact that many years of US Defense Department funding of his research, both before and since the publication of this volume, have made a vital contribution to assuaging the consciences of those who are determined to minimize “our” casualties and maximize “theirs.” The notion that the laws of war can be reduced to programmable formulae and the idea that the human conscience can be mechanically replicated are both far more problematic than Arkin’s work would suggest.
Sarah Knuckey on Arkin’s Governing Lethal Behavior in Autonomous Robots:
Arkin’s Governing Lethal Behavior is foundational for the study of the ethics and legality of autonomous weapons. Based on his decades of research as a scientist in the field of military robotics, including numerous US Department of Defense supported projects, Arkin predicts the eventual deployment of autonomous robots on the battlefield. Crucially, he anticipates that autonomous weapons may be able to perform ethically better than humans. Driven by awareness of the extent of atrocities committed during war, troubling surveys of the battlefield ethics of US soldiers, and a genuine desire to reduce war crimes, Arkin’s work is particularly important for those who are skeptical of or opposed to the development of autonomous weapons on grounds of harm to civilians.
Although Arkin treats seriously the long-term nature of his “ethical governor” project, and is attentive to the difficulty of translating legal rules – often highly contextual and subjective – into clear programming for robot fighters, some readers will find his text overly optimistic about the possibility of technological fixes for war crimes. For lawyers, some of his writing on the potential to program the laws of war may also read as overly optimistic, particularly in light of the last decade of heated debates about the meaning and application of core legal concepts in armed conflict, such as distinction and direct participation in hostilities. Yet Arkin’s early research is a path-breaking contribution to debates about the practical possibility and theoretical desirability of autonomous weapons systems, and future research will draw heavily from it, seeking either to vindicate or contest his research and the larger military and weapons manufacturer drive towards autonomy of which it is a part.
Further Reading
- G.A. Bekey, “Review: Governing lethal behavior in autonomous robots,” Computing Reviews (October 26, 2009) – very positive (blurb)
- Vik Kanwar, “Post-Human Humanitarian Law: The Law of War in the Age of Robotic Weapons,” 2 Harvard Nat’l Sec. J. 616 (2011) – somewhat positive (blurb)