(Editor’s Note: This article is the first installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law).
This should be the golden age of AI governance. But words and actions are running in opposite directions.
Beginning around 2016, a proliferation of guides, frameworks, and principles sought to articulate ground rules for AI. The trend began in western technology companies, but quickly spread across sectors and around the world.
States soon got in on the act, with many adopting national AI strategies that nodded at AI governance or responsible AI; some, like the European Union and China, are now formalizing these policies into laws.
At the global level, in November 2021 UNESCO’s member states unanimously adopted a “recommendation” on the ethics of AI. AI also looms large on the agenda of the G7, the G20, the OECD, the OSCE, the UN, and the WEF.
For all this talk, more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.
This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in the deployment of AI products but also in fundamental research and datasets. The second is the wariness of most states about regulating the sector too aggressively, for fear that doing so might drive innovation elsewhere. The third is the dysfunction of global processes for managing collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders.
Resolving these challenges requires either rethinking these incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus.
The Turn to Industry
AI is shifting economic and, increasingly, political power from public to private hands. That is now true throughout the lifecycle of AI, from fundamental research, which has moved from universities to corporations, to deployment by private actors under minimal government constraint.
A key driver is the rise of machine learning and increasing requirements for data, money, and raw computing power.
In 2014, most machine learning models were released by academic institutions; in 2022, of the dozens of significant models tracked by Stanford’s AI Index, all but three were released by industry.
Private investment in AI in 2022 was eighteen times greater than in 2013. In 2021, the U.S. government allocated US$1.5 billion to non-defense academic research into AI; Google spent that much on DeepMind alone.
Talent has followed. The number of AI research faculty in universities has not risen significantly since 2006, while industry positions have grown eightfold. Two decades ago, only about twenty percent of graduates with a PhD in AI went to industry; today around seventy percent do.
The fact that pure as well as applied research is now being undertaken primarily within industry has had two consequences.
First, it is shortening the lead-time from investigation to application. That may be exciting in terms of the launch of new products — epitomized by ChatGPT reaching a hundred million users in less than two months. When combined with the downsizing of safety and security teams mentioned earlier, however, it suggests that those users are both beta-testers and guinea pigs.
Secondly, corporate actors are incentivized to focus on profitability. OpenAI, the company behind ChatGPT, began as a non-profit in 2015 with lofty statements as to how that status would enable it to “benefit humanity as a whole, unconstrained by a need to generate financial return.” Just over three years later, the company announced that it was now following a “capped-profit” model, allowing it “to rapidly increase our investments in compute and talent.”
It is too early to judge what impact this will have on the application side of AI, but there are already suggestions that the emphasis will be on monetizing human attention and replacing human labor rather than augmenting human capacities.
The Hesitation of the State
States, meanwhile, are more wary of overregulating than underregulating AI.
With the notable exception of the European Union’s new legislative regime and episodic intervention by the Chinese government, most states have limited themselves to nudges and soft norms — or inaction.
This is a rational approach for smaller jurisdictions, necessarily rule-takers rather than rule-makers in a globalized environment.
Yet there are risks. More than four decades ago, David Collingridge observed that any effort to control new technology faces a double bind. In the early stages of innovation, exercising control would be easy — but not enough is known about the potential harms to warrant slowing development. By the time those harms are apparent, however, control has become costly and slow.
Most states focus on the first horn of the Collingridge dilemma: predicting and averting harms. In addition to conferences and workshops, research institutes have been established to evaluate the risks of AI, with some warning apocalyptically about the threat of general AI. If general AI truly poses an existential threat to humanity, this might lead to calls for restrictions, analogous to those on research into biological and chemical weapons, or a ban like that on human cloning.
It is telling, however, that no major jurisdiction has imposed a ban, either because the threat does not seem immediate or due to concerns that it would merely push that research elsewhere.
If regulation targets more immediate threats, of course, the pace of innovation means regulators must play an endless game of catch-up. Technology can change exponentially, while legal, social, and economic systems change incrementally.
Collingridge himself argued that, instead of trying to anticipate the risks, more promise lies in laying the groundwork to address the second aspect of the dilemma: ensuring that decisions about technology are flexible or reversible. This is also challenging, not least because it risks the “barn door” problem of attempting to shut it after the horse has bolted.
A further question is what form regulation should take and at what level it should tackle the problem. As I have argued elsewhere, most laws can govern most AI use cases most of the time. But, to fill the gaps, there are at least four possible answers.
The first approach is to repurpose existing data protection laws to address automated processing and certain aspects of AI. This was the initial approach in the European Union, from its 1995 Data Protection Directive to the General Data Protection Regulation of 2016, reflecting early use cases and concerns about AI, namely that it might misuse personal data or be used for inappropriate profiling based on personal characteristics.
The second is to take a sectoral approach, identifying potential harms and applying specific fixes to address those harms. This appears to be the approach most likely to gain traction, with examples of new laws being adopted to govern autonomous vehicles, fintech, and medical devices.
Thirdly, as we have seen in the more recent legislation adopted and proposed in the EU, it is possible to take an omnibus approach to regulating AI — with all the complications to which that gives rise. The EU describes its draft AI Act as “the world’s first comprehensive AI law”; the current draft defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”
Finally, regulation could focus not on the use case, nor on the underlying software, but on the hardware itself. This idea of “regulating compute” is attractive in part because hardware is a physical thing that can be inspected — analogous to the way in which research into hazardous substances (biological, chemical, nuclear, and so on) requires accreditation and licensing of researchers, as well as appropriate safety procedures and containment protocols in their workplaces.
At present, however, the most prominent way in which such a lens has been applied to AI governance is in the efforts by the United States to limit China’s access to high-performance computer chips through export controls.
An International Artificial Intelligence Agency?
In the face of such governance challenges — states being weak relative to industry, and unable or unwilling to cooperate with one another — the obvious solution is some kind of global initiative to coordinate or lead a response.
In 2021, I posited the idea of a new agency modeled on the International Atomic Energy Agency (IAEA). Like nuclear energy, AI is a technology with enormous potential for good or ill; the grand bargain at the heart of the IAEA is the promise of sharing the beneficial applications of that technology widely — in exchange for a commitment not to weaponize it.
The equivalent weaponization of AI – either narrowly, through the development of autonomous weapons systems, or broadly, in the form of a superintelligence that might threaten humanity – is today beyond the capacity of most states. For weapons systems, at least, that technical gap will not last long. Much as the small number of nuclear-armed states is due to the decisions of states not to develop such weapons and to a non-proliferation regime that verifies this, limits on the dangerous application of AI will need to rely on the choices of states as well as on reliable enforcement mechanisms.
Clearly, it will be necessary to establish red lines prohibiting certain activities. Weaponized or uncontainable AI are the most obvious candidates. Reliance on industry self-restraint alone will not sustain such prohibitions. Moreover, if those red lines are to be enforced consistently and effectively, some measure of global coordination and cooperation is required. Here the analogy with nuclear weapons is most pertinent.
The idea of such an “International Artificial Intelligence Agency” has gained some traction in theory, with endorsement from academics, from industry leaders like Sam Altman, and from the Secretary-General of the United Nations itself.
In practice, of course, the barriers are enormous.
Nuclear energy refers to a well-defined set of processes related to specific materials that are unevenly distributed; AI is an amorphous term, and its applications are extremely wide. The IAEA’s grand bargain focused on weapons that are expensive to build and difficult to hide — the weaponization of AI promises to be neither.
More generally, the geopolitical tensions that are hindering national action can stymie international cooperation completely.
Perhaps the greatest problem, however, is that the structures of international organizations are ill-suited to — and often vehemently opposed to — the direct participation of private sector actors.
If technology companies are the dominant actors in this space but cannot get a seat at the table, it is hard to see much progress being made. (On the other hand, some companies have operated through governments as a kind of proxy, which is arguably the definition of regulatory capture.)
That leaves two possibilities: broaden the table or shrink the companies.
The World Economic Forum is betting on the former, with its AI Governance Alliance an example of a multi-stakeholder initiative that brings together industry, governments, academics, and civil society organizations. (Disclosure: I am an academic member.)
Yet, the latter — breaking up the tech companies — would be more in keeping with existing structures.
In the United States, the Justice Department is suing Google, while the Federal Trade Commission has ongoing actions against Amazon, having brought unsuccessful suits against Microsoft and Meta. In the European Union, in addition to ongoing efforts to limit the power of the tech giants, six “gatekeepers” have been designated under the new Digital Markets Act, imposing stricter obligations and reporting requirements on them.
Only China, however, has successfully broken up tech companies in a purge lasting from 2020 to 2023 that saw trillions of dollars wiped off the share value of those companies, with Alibaba broken into six new entities — costs that Beijing was willing to bear, but at which Washington or Brussels might balk.
The Coming Crisis
The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.
Industry standards will be important for managing risk, but companies have every incentive to develop and deploy ever more powerful models with few guardrails in place. To the extent that the largest companies are calling for action by regulators, this is at least partly in the hope that friendly regulation will consolidate their position and raise costs for competitors.
Countries have the tools to regulate, but face invidious choices between overregulation that drives innovation elsewhere and underregulation that exposes their populations to harm. Some, like the European Union and China, are large enough or single-minded enough to impose tough controls. Others, like the United States, act decisively only when presented with the zero-sum game of an AI arms race. Most are in wait-and-see mode.
The hypothetical International Artificial Intelligence Agency proposed here is one means of addressing these structural barriers to coordination and cooperation.
Perhaps the greatest flaw in that analogy is that the IAEA was negotiated when the effects of the nuclear blasts on Hiroshima and Nagasaki were still being felt.
There is no such threat from AI at present and no comparably visceral evidence of its destructive power. It is conceivable that such concerns are overblown, or that AI itself will help solve the problems raised here. If it does not, global institutions that might have prevented the first true AI emergency will need to be created swiftly to avert the next catastrophe.