A high-profile debate has been playing out in the media over the safe and responsible use of artificial intelligence (AI), kicked off by the Future of Life Institute’s “pause petition” calling for a halt to training the most advanced AI systems. The petition focused broadly on AI safety, but it was soon joined by other arguments raising more specific concerns about worker protection, social inequality, the emergence of “God-like AI,” and the survival of the human race.

In response to concerns about AI safety, U.S. President Joe Biden met last month with the CEOs of frontier AI labs and Congress held hearings on AI in government and AI oversight. These conversations have been echoed around the world, with the United Kingdom planning to host the first global summit on AI this fall.

But as the world focuses more on regulation, it is important not to lose sight of the forest for the trees. AI poses different types of risks in the short and long term, and different stakeholders are best placed to mitigate each: existing problems that AI exacerbates, new problems that AI creates, and risks arising from uncontrollable AI systems.

Old Problems Made New with AI

Existing issues, such as data protection and privacy, worker exploitation, bias, and unchecked corporate power, will be exacerbated by AI. While AI perpetuates some problems, like bias, it also creates new ones, such as the development of previously unknown toxins. These issues are exemplified in the Distributed AI Research (DAIR) Institute’s letter, which states, “We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power, and increasing social inequities.” The letter argues that concrete problems with AI that are already affecting everyday lives should be prioritized over future, and potentially overhyped, threats.

For many of these issues, the government is the most important stakeholder. These are primarily policy problems, not engineering ones, and they are best addressed with the existing government toolbox of regulations, standards, and benchmarks. Governments around the world are painfully aware of this, with the White House releasing an AI Bill of Rights, Congress considering regulating generative AI, and the European Union developing an emerging regulatory framework for AI. However, much work on regulation remains to be done, and governments need to develop novel methods, and properly apply existing regulations, to govern the myriad ways in which this technology impacts society.

AI Kills the Radio (and Video) Star

AI can also exacerbate existing risks to the point that they become new categories of problems society is unprepared for, such as deepfakes. The technology already exists, but AI could increase the quality and quantity of fake media so drastically that the line between reality and simulation blurs. It could also create situations of human obsolescence, where automation replaces so many workers that the social and fiscal fabric of entire economies must be rethought, or where creative pursuits become pointless in the face of generative AI. Disinformation and job losses to automation are perennial issues for societies, but AI has the potential to change their magnitude in ways we can barely imagine.

Unfortunately, these issues are much harder to plan for because their contours will not be clear until the technology emerges. For example, there is little use in drafting proposals to retrain truck drivers whose jobs were once thought to be on the verge of disappearing, when it is actually artists who now have the most to fear. Governments, however, must be ready to work with civil society partners on new issues as they arise, whether that means creating new worker retraining programs or developing novel methods of defeating disinformation.

Apocalypse Soon?

Haywire lethal autonomous weapons and a poorly optimized financial model that causes economic ruin are both examples of the final type of AI risk: a lack of control over powerful AI systems. Recent attention, however, has centered on a potential world-ending cataclysm caused by an accident involving a future superintelligent AI. AI apocalypses have long been a staple of science fiction, but experts are now seriously contemplating this possibility. Artificial intelligence researcher Eliezer Yudkowsky, for example, argues that, without proper precautions, “literally everyone on Earth will die” due to artificial superintelligence that “does not care for us nor for sentient life in general.” Although this is often classified as a long-term risk, Yudkowsky and Geoffrey Hinton (formerly of Google) have both argued it is an urgent problem that should be prioritized.

Expanding private sector research on AI safety and alignment, as well as carefully creating and controlling new models, will have the largest effect on reducing the risk of an “AI apocalypse.” Here, the government’s role is to provide research funding and put controls in place for AI models. AI researchers, for their part, must go beyond what government regulators require and develop models in safe and responsible ways.

The pause petition, the DAIR Institute, and all the others who are part of this critical conversation raise genuinely important problems, and they have likely introduced those problems to an audience unaware that these risks exist. But public consciousness alone will not make AI safe and stable, and pitting short-term problems against long-term ones risks stymieing efforts to enact regulation across the board. Ultimately, the challenges the world faces with AI are not mutually exclusive; each has a specific set of stakeholders, which means different groups must tackle different challenges simultaneously. As AI proliferation raises myriad anticipated and unanticipated challenges, the world must be prepared to address multiple risks at once.
