Governing AI Agents Globally: The Role of International Law, Norms and Accountability Mechanisms

The Rise of AI Agents

Industry leaders have dubbed 2025 “the year of the AI agent.” Unlike chatbots, these systems can set goals and act autonomously without continuous human oversight. The most popular AI agents can book appointments, make online purchases, write code, and conduct research. Some types of AI agents—known as “action-taking AI agents”—can interact with external tools or systems via application programming interfaces (APIs), and even write and execute computer code with software development kits (SDKs). Their potential is enormous: automating work, optimizing systems, and freeing up time. But their ability to take actions in the real world also brings new risks that extend far beyond national borders. This post explores why global governance is key to managing those risks and how it should be grounded in existing, non-AI-specific international law, norms, and accountability mechanisms.
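
To make that concrete, here is a minimal sketch (in Python) of the loop an action-taking agent runs: a model proposes a tool call, the runtime executes it against an external system, and the result informs the next decision. The propose_action stub and the tool names are hypothetical illustrations, not any particular vendor's API.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Action:
    tool: str    # name of the external tool the agent wants to invoke
    args: dict   # arguments the agent supplies for that call

def book_appointment(args: dict) -> str:
    # Stand-in for a real API call (e.g., a scheduling service).
    return f"Booked {args.get('service')} for {args.get('time')}"

TOOLS: Dict[str, Callable[[dict], str]] = {"book_appointment": book_appointment}

def propose_action(goal: str, history: List[str]) -> Optional[Action]:
    # Stand-in for a model call; a real agent would query an LLM here.
    if not history:
        return Action("book_appointment", {"service": "dentist", "time": "Tue 9am"})
    return None  # goal judged satisfied; stop acting

def run_agent(goal: str) -> List[str]:
    history: List[str] = []
    while (action := propose_action(goal, history)) is not None:
        result = TOOLS[action.tool](action.args)  # the agent acts on the world
        history.append(result)
    return history

print(run_agent("Book my dentist appointment"))

In a real deployment, propose_action would query a large language model and TOOLS would wrap live APIs, which is exactly where the oversight challenges discussed below arise.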

Action-taking AI agents can directly affect the digital and physical infrastructure around them in complex and unpredictable ways, posing new challenges for human oversight. This could exacerbate well-documented AI risks, including privacy breaches, mis- and disinformation, misalignment, adversarial attacks, adverse uses (including to carry out cyberattacks), job displacement, corporate power concentration, and anthropomorphism (resulting in overreliance, manipulation, and emotional dependence). AI agents may also give rise to new risks, such as function calling hallucination (attempting to invoke tools that do not exist or with malformed arguments), cascading errors across interconnected systems, self-preservation behavior, and loss of control. Because many of these systems operate online, their actions—and harms—can easily cross borders.
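
As an illustration of the first of these new risks, the sketch below shows how an agent runtime might catch a function calling hallucination, i.e., a proposed call to a tool that was never registered or that lacks required arguments, before it reaches any real system. The registry and tool names here are hypothetical assumptions, not a standard interface.

from typing import Optional

# Hypothetical registry of tools the agent is actually allowed to call.
REGISTRY = {
    "send_email": {"required_args": {"to", "subject", "body"}},
    "create_event": {"required_args": {"title", "start"}},
}

def validate_call(tool: str, args: dict) -> Optional[str]:
    """Return an error string if the proposed call is invalid, else None."""
    spec = REGISTRY.get(tool)
    if spec is None:
        return f"hallucinated tool: {tool!r} is not registered"
    missing = spec["required_args"] - args.keys()
    if missing:
        return f"invalid call to {tool!r}: missing {sorted(missing)}"
    return None

# A hallucinated or malformed call is rejected before execution.
print(validate_call("wire_funds", {"amount": 100}))    # not a registered tool
print(validate_call("send_email", {"to": "a@b.com"}))  # missing required arguments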

Managing cross-border risks or harms is a task that can hardly be accomplished solely at the national level. This is why it is crucial for policymakers, companies, and other stakeholders to examine how to best govern AI agents globally, and why we at the Partnership on AI (PAI) have confronted this issue head-on in our latest policy brief on the topic.

AI Agents and Global Governance 

There is no shortage of principles and best practices that were crafted at the international level specifically for AI technologies and also apply to AI agents. Most notably, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the OECD AI Principles, and the G7’s Hiroshima Code of Conduct all emphasize transparency, safety, security, and respect for human rights. There is also much discussion about developing new international agreements and institutions to govern AI. For example, over 300 experts and 90 organizations recently issued a Global Call urging governments to reach an international agreement on red lines for AI by the end of 2026. The Global Call expressed particular concern about action-taking AI agents, noting that “[s]ome advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world.”

But stakeholders should not overlook the foundational, technology-neutral tools that they already have: existing international law, non-binding norms, and accountability mechanisms. These are the result of decades of global negotiations and have helped the international community navigate complex global challenges—from war and famine to climate change and cybersecurity. Understanding how they apply to AI agents is key to governing this new technology inclusively in a challenging geopolitical environment.

The importance of governing AI—and AI agents specifically—through these foundational global governance tools is underscored by the United Nations’ (U.N.) recent announcement of two new dedicated AI mechanisms: the Independent International Scientific Panel and the Global Dialogue on AI Governance. The Panel will be an independent body of 40 multidisciplinary experts, tasked with issuing “evidence-based scientific assessments synthesizing and analysing existing research related to the opportunities, risks and impacts of artificial intelligence.” And the Dialogue is intended to function as a multistakeholder forum for discussions of AI governance questions, including, in particular, “[r]espect for and protection and promotion of human rights in the field of artificial intelligence” and “transparency, accountability and robust human oversight of artificial intelligence systems in a manner that complies with international law.”

In our work on AI agents and global governance, we have focused on potential cross-border harms and human rights impacts because of the frequency and severity with which these harms are anticipated to occur if action-taking AI agents are deployed at scale. Yet we are conscious that there are many other risks that need to be managed globally, including inequitable technology adoption, environmental impacts, and specific risks arising in the military context.

Addressing Cross-Border Risks

Because many AI agents take actions online, their impacts can easily cross borders and affect governments, companies, and individuals worldwide. Consider an AI agent able to generate content and post it on social media. Such a system can hallucinate or be exploited by malicious actors to spread disinformation online, undermining public trust. Or take an AI agent that can write and run code. Not only can an error affect a computer program’s source code and alter how it works (e.g., by introducing a software vulnerability); the technology could also be exploited for malicious purposes, such as adversarial attacks or the creation of sophisticated forms of agentic malware. These risks are even more acute given the prospect that AI agents may eventually be deployed in critical sectors, such as energy, finance, education, healthcare, transportation, and telecommunications.

International law prohibits states from using AI agents in ways that undermine the sovereignty of other states or interfere in their internal or external affairs in a coercive manner. Examples include using AI agents to cause physical harm or to interfere in democratic processes abroad. International law also protects AI agents deployed by public or private entities for inherently governmental functions, including healthcare, education, agriculture, social services, transport, and financial services. International law also arguably imposes a duty on states to exercise care when allowing AI agents to be developed and deployed in their territory. This due diligence obligation requires states to seek to prevent or mitigate the harms that AI agents might cause not only to other states, but also to private companies and individuals in other jurisdictions, whether the harm is caused by an agent malfunction or misuse by a state or non-state actor.

Non-binding norms complement these rules by recommending best practices for states and companies in the context of information and communications technologies (ICTs). When AI agents take actions online, they are part of the ICT environment and therefore subject to these norms. Examples include the U.N. voluntary norms of responsible state behaviour in the use of ICTs and the Paris Call for Trust and Security in Cyberspace, which promote international cooperation, critical infrastructure protection, and the prevention of the proliferation of malicious tools and practices.

Protecting Human Rights

Even when the risks or impacts of AI agents are restricted to a single jurisdiction, they can affect internationally recognized human rights. Privacy is a key concern: to perform often highly personalized tasks, AI agents must access different types of personal data, such as personal files, emails, or calendars. Not only might this data be inappropriately accessed, but it could also be leaked to other applications that the AI agent interacts with, whether due to an agent malfunction or an adversarial attack. There is also evidence that AI agents can resort to manipulation and other coercive techniques to achieve certain goals. For example, Anthropic reported that Claude Opus 4—an agentic AI assistant—blackmailed a supervisor to prevent being shut down, and that several models it tested resorted to blackmail and information leaks to avoid replacement or achieve their goals. These kinds of behavior might affect individuals’ right to freely form and express their opinions. Given AI agents’ high levels of autonomy and complexity, there are also concerns that they will more significantly impact the job market than other AI technologies, such as chatbots.
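
One way to picture a privacy safeguard for these data flows: the agent runtime forwards to each tool only the personal-data fields that tool legitimately needs. The sketch below is illustrative only; the profile fields, tool names, and scope table are assumptions, not a prescribed design.

# Each tool declares the personal-data fields it needs; the runtime
# strips everything else before the data leaves the agent.
PROFILE = {
    "name": "Ada",
    "email": "ada@example.com",
    "calendar": ["Tue 9am dentist"],
    "medical_notes": "private",
}

TOOL_SCOPES = {
    "scheduler": {"name", "calendar"},   # identity and availability only
    "mail_client": {"name", "email"},
}

def scoped_payload(tool: str, profile: dict) -> dict:
    allowed = TOOL_SCOPES.get(tool, set())  # unknown tools receive nothing
    return {k: v for k, v in profile.items() if k in allowed}

print(scoped_payload("scheduler", PROFILE))  # medical_notes is never shared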

Human rights treaties such as the International Covenants on Civil and Political Rights (ICCPR) and Economic, Social, and Cultural Rights (ICESCR) impose both negative obligations (to refrain from violations) and positive obligations (to protect rights from third-party interference). States must therefore refrain from and prevent human rights harms that might arise from designing, developing, or deploying AI agents within and arguably beyond their borders.

Companies, though not bound by international law, are guided by the U.N. Guiding Principles on Business and Human Rights. These call for corporate due diligence to prevent and mitigate human rights impacts—a responsibility that extends to AI agents’ design, development, and deployment.

Accountability and Potential Gaps

When states breach international law, they are required to stop the violation and remedy any harm caused. They can be compelled to do so through international courts, countermeasures (e.g., sanctions) by the injured state, or a decision of the U.N. Security Council. Yet there is no centralized enforcement mechanism, and the U.N. Security Council is often paralyzed by geopolitical divides. Companies’ human rights commitments remain voluntary and are therefore not internationally enforceable. AI agents also complicate accountability. Their actions are not automatically attributable to a state, and state responsibility usually attaches to foreseeable harms, not to the often unpredictable actions of AI agents. This means that international accountability for the global harms of AI agents is not a given; when responsibility does arise, it hinges on states and domestic legal systems for enforcement.

Moving from Principles to Action

Existing global governance tools provide a foundation for governing AI agents, but they must be implemented appropriately and tailored to specific use cases. In particular, governments and companies should ensure:

  • Rigorous pre- and post-deployment testing and evaluations;
  • Failure and vulnerability detection systems;
  • Limited affordances (i.e., what an AI agent’s architecture enables it to do) and sufficient human oversight for high-stakes decisions, particularly those affecting people’s rights in sectors such as healthcare, social care, employment, immigration, and national security (a minimal sketch of this pairing follows the list);
  • Transparency in the use of the technology;
  • Effective remedies for those affected;
  • Critical infrastructure resilience, including through robust safety, security and redundancy mechanisms; and
  • Societal resilience through AI agent literacy and awareness-raising.
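
To illustrate the third recommendation, the sketch below pairs limited affordances (a default-deny allowlist of tools) with a human approval gate for high-stakes actions. The tool names and the approval hook are hypothetical; a real deployment would use a sturdier review workflow than a console prompt.

LOW_STAKES = {"summarize_document", "draft_reply"}
HIGH_STAKES = {"transfer_funds", "submit_medical_order"}

def human_approves(tool: str, args: dict) -> bool:
    # Stand-in for a real review step (a ticket queue, UI prompt, or second operator).
    answer = input(f"Approve {tool} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def authorize(tool: str, args: dict) -> bool:
    if tool in LOW_STAKES:
        return True                        # within the agent's affordances
    if tool in HIGH_STAKES:
        return human_approves(tool, args)  # human-in-the-loop gate
    return False                           # anything unlisted is denied by default

if authorize("transfer_funds", {"amount": 5000, "to": "ACME Corp"}):
    print("executing transfer")
else:
    print("action blocked pending human approval")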

The United Nations should appoint a Special Rapporteur on AI and Human Rights to clarify how existing frameworks apply to AI agents, and leverage its new AI mechanisms—the Scientific Panel and the Global Dialogue—to foster inclusive dialogue on the topic.

All stakeholders should invest in more research into the risks of AI agents to help close accountability gaps, including through the International AI Safety Reports and by expanding the International Network of AI Safety Institutes.

AI agents are just beginning to emerge, but their potential global impact is significant. The choices stakeholders make now—about governance, accountability, and enforcement—will shape whether this technology strengthens or undermines the international order. The world already has many of the legal, normative, and institutional tools to address the challenges that the era of agentic AI will bring. The task ahead is to leverage these tools decisively and creatively to ensure AI agents serve humanity, not destabilize it.
