(Editor’s Note: This article is the fifth installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law).
Artificial Intelligence (AI) is not new. It has been around at least since the 1950s, when Alan Turing laid out the theoretical framework for AI and the first “intelligent” problem-solving computer programs were created. Over the past two decades, AI – and machine learning specifically – has powered a vast number of digital technologies, including search engines, recommendation algorithms, drones, self-driving cars, and facial recognition systems. But old debates about AI have resurfaced and new ones have emerged with the recent rise in popularity and accessibility of novel chatbots, image generators, and other applications of large language models and related systems – also known as “generative AI.” These models may enable societal and generational leaps unlike any AI-powered technology before them. Their unprecedented human-like capabilities, versatility, and ubiquity have fed both hype and fear.
The potential benefits of AI are enormous, ranging from helping the world tackle complex challenges such as climate change and serious diseases to improving efficiency in the workplace. But the risks are just as great, including the use of AI to amplify disinformation, power cyberattacks, and further entrench bias and injustice, not to mention apocalyptic claims that AI will surpass human intelligence. These debates – some of which are hyperbolic and binary – are all taking place against the background of fierce geopolitical competition, where AI is prized by all yet concentrated in the hands of a few, and amid the dizzying pace of technological development.
At the heart of those debates is a fundamental dilemma: how to harness AI’s enormous potential for good while minimizing its risks and ensuring equitable access to the technology? In our view, this delicate balance can only be struck with proper AI governance at the national, regional, and global levels. Crucially, respect for international law should be the starting point.
AI Governance as a Global Challenge
Like other digital technologies, AI knows no geographical boundaries: it affects the lives of human beings and the fabric of societies around the globe in fundamental ways. The impact of AI can range from an individual’s credit score or social media feed to the development of weapons and the shaping of the global information environment. Thus, AI governance is not just a corporate endeavor but the business of all States. At the same time, figuring out how best to govern AI is no easy task. It is a complex exercise that requires concerted action by diverse stakeholders with different cultural, social, political, and expert backgrounds. This means that States and other relevant actors – including private companies, civil society, and academia – need to work together. International law provides them with a tried and tested common language from which AI governance can be developed at a global scale.
For example, international human rights law recognizes a catalogue of fundamental rights and freedoms that have been agreed upon by all States in universal instruments such as the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights, as well as under customary international law. The exact scope of each right varies across different States, given their diverse social, cultural, and political contexts. Nevertheless, international human rights law reflects a minimum common denominator that can serve as a benchmark for States and other stakeholders when considering how to protect human rights without stifling technological innovation.
Navigating Uncertainty and Competition
The risks and actual impact of AI technologies on different sectors of society are still unknown and evolving rapidly. Yet corporations and States alike – especially big players like the United States, the United Kingdom, the European Union, China, and India – are in a non-stop race to develop and acquire AI technologies. The stakes are very high. By laying out the rules of the road, international law can help bring clarity, predictability, trust, and confidence among States and other stakeholders in this uncertain, fast-developing, and competitive environment.
For example, standards of due diligence may inform what behaviour is expected from States and corporations to prevent or mitigate AI harms. Likewise, the principle of non-intervention requires States not to interfere in the internal or external affairs of other States, including by using AI to influence electoral processes or orchestrate malicious cyber operations against critical sectors. Like any legal system, international law is not foolproof. Its enforcement mechanisms are notoriously scarce. But international rules and principles freely agreed upon by States provide the baseline for responsible State and corporate behaviour in the digital environment. They have helped bring about levels of peace, security, and prosperity in the post-World War II era that were unseen before. This is why they are still worth following in the age of AI – both online and offline.
International Law Already Applies to AI
International law is not just a good policy idea. It already applies to the use of AI technologies by States and, to a certain extent, individuals and corporations. States are bound by a multitude of treaties, customary international law, and general principles of law. Corporations have a social responsibility to respect human rights. And individuals must not commit international crimes.
Crucially, international law is technology neutral. This means that its rules and principles apply to old and new technologies. For example, the International Court of Justice affirmed that the prohibition on the use of force and international humanitarian law apply to all kinds of weapons, irrespective of the technology behind them. This is not to say that AI is a weapon and should be regulated as such, as some have suggested. AI’s enormous potential can be used for good or evil. The point is that the technology has multiple uses and applications, by States and other actors, such that international law applies whenever relevant to the behaviour in question.
It is true that certain international rules apply to specific goods or technologies, such as radio broadcasting. However, for the most part, international rules and principles are sufficiently general or flexible to accommodate new technological developments. This is about interpreting existing rules in light of new societal phenomena – what we call “evolutionary interpretation.” Many of those rules were developed at a different time and with different phenomena in mind. Thus, international law needs to be – and is likely capable of being – adapted to AI through a careful understanding of the technology’s unique features, including its versatility, speed, and scale. This necessitates multi-stakeholder and multidisciplinary (if not interdisciplinary) engagement. Far from a weakness, international law’s generality can ensure that it stands the test of time in the face of AI’s rapid development.
That international law already applies to AI also means that an international treaty for AI is not a given. AI does not exist in a legal vacuum and, as noted earlier, general protections and prohibitions under international law are still relevant. Before deciding whether or not a treaty is needed, States must better understand what the existing international legal framework looks like when applied to AI. When considering the pros and cons of such a treaty, relevant questions include: Is the existing legal framework sufficient? Does it leave gaps in the protection of certain values or groups, especially vulnerable persons? Is it adequate to address the new challenges and risks raised by AI? Does it need more granularity to achieve the right balance? Is there a suitable existing forum to have global discussions on AI? How can diversity of thought be meaningfully built into AI negotiations from the outset, including the perspectives of next-generation leaders, women, and the Global South? Treaties take a lot of time, political will, and effort, and may be easily outpaced by the development of the technology. There is also a risk that, in an attempt to reach consensus, existing legal standards will be watered down for AI.
It is also important to note that similar discussions on the application of international law to cyber operations are already underway within and outside the United Nations, from which lessons might be drawn. At the United Nations, there are, for example, the Open-Ended Working Group on the security of and in the use of information and communications technologies (the OEWG), the Ad Hoc Committee on Cybercrime, and the Global Digital Compact. Beyond the United Nations, the Tallinn Manuals on International Law Applicable to Cyber Operations and the Oxford Process on International Law Protections in Cyberspace, for instance, have helped flesh out the contours of international law applicable to cyber operations. Several States have also published statements or national positions clarifying how they think different rules or principles of international law apply in this context. Thanks in part to these efforts, significant progress has been made on the application of international law in cyberspace. Discussions about AI can both be integrated with those on cyber (where commonalities or overlaps exist, such as the use of AI in cybersecurity) and draw from successes in the cyber context.
Beyond the question of how international law applies to AI, there is a separate question of which processes, forums, or institutions are appropriate to apply and enforce those rules or develop new ones. This is for States to decide at the domestic, regional, and international levels. However, when considering this question, States should bear in mind that the mandates of different international institutions already cover different uses or applications of AI by States or non-State actors, such as the United Nations and its organs, including human rights bodies, as well as international courts and tribunals and other dispute settlement mechanisms. International law also provides remedies for AI-based wrongs, ranging from UN Security Council measures to unilateral sanctions and countermeasures. And if new institutions or processes are devised for AI governance, States may consider drawing inspiration from existing international bodies that have been set up to deal with important technological challenges, such as climate change and civil aviation. They should also consider the need for technical expertise, multi-stakeholder engagement, and institutional coordination, and which risks can and should be addressed.
* * *
International law has a central role to play in AI governance. It provides States with a common vocabulary as well as greater clarity, predictability, and confidence in addressing this global and complex challenge. International rules and principles already apply to AI technologies: they can be interpreted to accommodate different uses and applications of AI technology by different actors around the globe. The task of interpreting international law in the AI context is not easy and requires a collective effort that brings together different stakeholders and areas of expertise. At this stage, there are more questions than answers. However, given the rapid pace of AI development and the risk that the technology will outpace any regulation, it is clear that AI governance should be flexible and dynamic, covering all stages of AI development. While this flexibility is inherent in the generality of international law, States and other stakeholders still need to think collectively about how it can be applied in practice, including through existing or new processes, forums, or institutions. This is where the conversation about global AI governance should be going.