The first global summit to discuss the responsible development, deployment, and use of artificial intelligence (AI) for defense and military purposes took place in The Hague last month. The Summit on Responsible AI in the Military Domain (REAIM), which the Netherlands co-organized with South Korea from Feb. 15-16, gathered approximately 2,000 representatives from across government, industry, academia, and civil society. Among them were 57 States – including all permanent members of the United Nations Security Council except Russia, and other technologically leading States such as Japan and Germany – that agreed on a joint Call to Action. In addition, the United States presented a Political Declaration to advance States’ engagement on responsible AI, and South Korea announced that it plans to host a second REAIM Summit.

The REAIM Summit broadens international discussions, which currently focus on lethal autonomous weapon systems (LAWS), to more general military and defense-related applications of AI. It also expands understanding of “responsible AI” (RAI), a concept the United States and NATO frequently reference, to a wider community of States and launches an international political agenda. RAI has the potential to provide a new framework for addressing the related ethical and legal challenges of AI in the military domain at both the international and national levels. Yet it remains to be seen to what degree important actors in the field will engage in this endeavor – China, for instance, as well as States that aim to ban LAWS completely, such as Latin American and Caribbean States, South Africa, and Austria.

The Context

At the international level, the use of military AI has primarily been discussed in the “Group of Governmental Experts related to emerging technologies in the area of lethal autonomous weapons systems” (GGE on LAWS), convened under the Convention on Certain Conventional Weapons (CCW), which bans or restricts the use of specific types of weapons. Meeting regularly since 2016, the GGE offers a multilateral forum on the topic and in 2019 produced 11 non-legally binding Guiding Principles on the development and use of LAWS. Yet the GGE has not managed to move significantly closer to a global ban of LAWS despite pushes by several States and organizations. This raised the question of whether the multilateral discussions should move to a different forum that is not subject to the rule of consensus (by which a single State can block formal decisions), such as the United Nations General Assembly.

At the national level, States have made significant progress toward the governance of AI for defense and military purposes. The United States was the first to publish a set of Ethical Principles for AI in Defense, in February 2020, addressing all AI applications for military use. This built on Directive 3000.09 (published in 2012 and updated in January 2023), which assigns responsibility for the development and use of autonomous and semi-autonomous weapon systems and establishes guidelines to minimize the risk of failures that could lead to unintended engagements. In this vein, the United States introduced the concept of RAI to the military domain and remains at the forefront of related regulation and implementation, including the publication of the RAI Implementation Pathway. Other States, such as the United Kingdom and France, have also adopted national policies on military AI.

In 2021, NATO adopted its Principles on Responsible Use as part of its AI Strategy. This laid the basis for an Autonomy Implementation Plan, the creation of a Data and Artificial Intelligence Review Board, and an RAI certification standard. All 30 NATO allies have agreed to the principles, forming the first multinational policy framework on responsible AI. NATO’s newly launched efforts to implement and operationalize the principles aim to make them tangible and applicable across multiple States in order to establish common standards and State practice.

The First REAIM Summit

The Netherlands announced that it would organize the REAIM Summit and presented a roadmap at the U.N. General Assembly in 2022. To start initial discussions with stakeholders and prepare the substance of the Summit, the Netherlands then organized a series of regional consultations with States and exchanges with industry, followed by an interdisciplinary expert workshop.

The 2023 REAIM Summit engaged representatives of States, industry, academia, and civil society. It essentially contained two tracks: (1) a line-up of about 25 interactive panels over two days, organized by different institutions and open to the public (reflecting track 1.5 diplomacy, which involves States and non-State actors), and (2) a closed meeting among State representatives – many at the ministerial level – on the second day (reflecting classical track 1 diplomacy between States). In essence, the track 1.5 component raised broad awareness of the issue, gathered and connected different stakeholders, shared insights, discussed pressing themes, and explored future collaboration. The governmental track generated awareness at the political level and united States in engaging and committing to the issue.

Track 1.5 Diplomacy: Open, Multistakeholder Debates on Military AI

Complemented by a special issue of the journal Ethics and Information Technology, the discussions in the open sessions offered a holistic and clear picture of the current debates on military AI and of overarching tendencies. In general, the discussions focused on well-known but still unresolved issues, such as LAWS, including the presentation of related positions by the International Committee of the Red Cross (ICRC) and Amnesty International. Yet the general scope of AI applications and related challenges was broader, reflecting the field’s direction in research, policy, and technical developments. This included topics that have so far received less attention, such as innovation, drone swarms, nuclear risks, and the mitigation of civilian harm.

Multiple discussions showed that current thinking on responsible AI has shifted from the conceptual level to how to operationalize new policies and principles. The most extensively discussed topic was human-machine teaming and meaningful human control. The attention given to the issue, as well as the multitude of views from different disciplines, shows that human centricity in the development and use of AI remains a primary concern. Yet the theme has moved from abstract ethical considerations and definitional debates to concrete questions of technical implementation.

Similarly, many discussions focused on the operationalization of responsible AI. The United States, NATO, and the United Nations Institute for Disarmament Research (UNIDIR) shared related experiences and insights. Industry representatives indicated two different approaches in this regard. In the first approach, industry engages with the challenges of responsible AI and tries to find appropriate solutions. Europe’s largest defense project, the Future Combat Air System (FCAS), for instance, does this in a transparent manner and by including civil society. It notably created an expert commission on the responsible use of technologies to provide systematic support and guidance during FCAS development. The second approach is to produce and sell in an apparent absence of transparency and concern for ethical dilemmas.

In this sense, the open debates also revealed divisions among stakeholders regarding the concept of responsible AI. Although many participants seemed to welcome the concept as valuable and necessary, others, such as Stuart Russell, questioned whether the approach can actually prevent the ethically problematic use of military AI and its consequences.

Track 1 Diplomacy: Governmental Meeting, Call to Action, and Political Declaration

The second day of the Summit hosted a closed governmental meeting, with 80 States participating in the roundtable (many at the ministerial level). Russia, however, had not been invited. The organizers thus achieved their goal of raising political awareness of RAI across the globe and secured initial engagement from a considerable number of States, including many that develop or are likely to use military AI, such as China, South Korea, and Australia. This led to several concrete outcomes.

The Dutch Minister of Foreign Affairs announced a joint Call to Action at the end of the Summit, which 57 States support, including all permanent members of the U.N. Security Council except Russia and many technologically leading States. India, Brazil, South Africa, and others attended the Summit but did not endorse the Call. The Call recognizes the importance of several components of RAI and sets two goals. First, it requests continued efforts to engage in a global dialogue on RAI in the military domain, including dialogue with a diversity of stakeholders. Second, it encourages knowledge-building and the sharing of good practices. As such, the Call to Action is relatively unspecific regarding concrete measures and prudent in terms of commitment. The Call has political significance, however, by officially uniting a considerable group of States on the issue of RAI, by committing them to keep it on the international agenda, and by encouraging national efforts in this regard.

The United States also presented a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The Declaration contains 12 specific guidelines for the responsible use of AI in the context of defense. The guidelines include several notions that are central to RAI and in line with the U.S. and NATO principles, such as the requirement that the use of systems comply with international law (including the necessity to conduct legal reviews), the understandability of systems, the integration of safeguards, and the necessity to maintain appropriate levels of human judgment over AI systems. The Declaration also requires that the development and deployment of high-consequence applications, such as weapons, be done under the oversight of senior officials. Furthermore, it seeks to politically commit States to implement these practices, publicly describe their commitment, support others’ efforts, and further promote these practices internationally.

At this stage, the relationship between the Call to Action and the Political Declaration remains unclear. Neither document references the other, which indicates parallel – albeit potentially reinforcing – initiatives. The U.S. Declaration is clearly more specific and ambitious, as it sets guidelines for behavior and demands stronger commitment by States. In her keynote remarks, the U.S. Under Secretary of State for Arms Control and International Security, Ambassador Bonnie Denise Jenkins, said that the United States was inspired by the Summit and would use the occasion to launch the Declaration, which could become a focal point for international cooperation. While the Declaration would remain a U.S. document at this stage, it could be the beginning of a process in which the United States would continue to seek comments and input from other States.

Finally, South Korea announced that it will organize a second REAIM Summit. As such, responsible AI in the military domain will remain on the international agenda. The Dutch Minister of Foreign Affairs also announced the creation of a Global Commission on AI, whose tasks will be to clarify how AI can be developed and deployed responsibly and to determine conditions for effective AI governance.

Implications and Outlook

With the REAIM Summit, responsible AI in the military domain became a theme with global attention beyond the United States and NATO. The Summit’s innovative format of a multistakeholder conference was itself a step forward, engaging relevant actors in an open and transparent manner. This is crucial given the inherent ethical dilemmas of AI, citizens’ and soldiers’ sensitivity to its (moral) risks, and the role of (civilian) research institutions and industry in developing these technologies. From a substantive perspective, the Summit was the first major event to open the debate from a focus on LAWS to a broader range of military applications of AI. As such, the Summit set a new agenda, launched an international process on responsible AI, and started to engage States on the issue.

In concrete terms, the new agenda on responsible AI may lead to new industry standards, national regulations, and eventually legally or politically binding instruments of global governance. The nationalities of the organizers of the different discussion sessions and of the prominent figures at the conference suggest, however, that the issue is largely driven by Western States, stakeholders, and partners. Other States’ positioning on RAI will thus be decisive for the future direction and impact of the process.

Notably, China endorsed the Call to Action, but it remains uncertain whether it will further engage in the process to help conceptualize and implement responsible AI, or instead seek alternative approaches to regulating military AI. It is similarly unclear whether the United States is willing to collaborate and agree with China on contentious issues in the context of its Political Declaration. Also unresolved is whether States that are invested in a ban of LAWS perceive the initiative as supporting the prevention of LAWS or as a risk to their goals. Brazil, South Africa, and Austria, for instance, did not endorse the Call to Action. The Stop Killer Robots campaign criticized the initiatives on RAI as lacking political vision and called the commitments weak.

After the REAIM Summit, Latin American and Caribbean States met at a regional Conference on the Social and Humanitarian Impact of Autonomous Weapons in Costa Rica from Feb. 23-24, presenting a communiqué that calls for an international legally binding instrument banning LAWS, in line with the “Elements for a Future Normative Framework” presented by Brazil, Chile, and Mexico at the GGE on LAWS in 2021. While the communiqué does not explicitly take a stance on responsible AI, Human Rights Watch described it as standing in stark contrast to the Dutch and U.S. initiatives.

States will position themselves on these issues, and on how to proceed, in the coming months, notably at the meetings of the GGE on LAWS and at the First Committee of the U.N. General Assembly in September. What is certain is that, with the REAIM Summit, responsible AI in the military domain has gone global. It is likely that RAI will influence future international discourse, policy, and lawmaking.

IMAGE: Participants stand on stage during the REAIM Summit, which took place from Feb. 15-16 in The Hague, Netherlands. (Photo by Netherlands Ministry of Foreign Affairs / Phil Nijhuis)