Editor’s note: This article, originally published on November 6, 2023, is now updated to include new expert analysis, noted in red as “New.”

On Oct. 30, the Biden administration issued a set of new policies regulating the development of artificial intelligence technologies as well as their use by federal government agencies. A sweeping “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” encompasses eight key priorities: national security, individual privacy, equity and civil rights, consumer protections, labor issues, AI innovation and U.S. competitiveness, international cooperation on AI policy, and AI skill and expertise within the federal government. The order was followed by a memo from the Office of Management and Budget (OMB) establishing “new agency requirements and guidance for AI governance, innovation, and risk management.”

Among its many provisions, the executive order (E.O.) contains a number of significant national security measures. It requires developers to notify the government when they train new large models and to report on the safety measures they take. It gives the Department of Homeland Security (DHS) a sprawling remit to address the potential for AI to create chemical, biological, radiological, and nuclear (CBRN) threats. It also gives DHS a mandate to establish an AI Safety and Security Board and work with the Department of Defense (DoD) to develop protections for critical cyber infrastructure. The order directs the National Security Council to develop guidelines for the safe, ethical, and effective use of AI by the U.S. military and intelligence agencies. And it charges the Department of Commerce with developing guidelines for authenticating and watermarking AI-generated content. 

These recent moves follow voluntary commitments that the White House secured from leading AI firms in July, as well as the Blueprint for an AI Bill of Rights, and the National Institute of Standards and Technology (NIST) AI Risk Management Framework. All of these efforts reflect the Biden administration’s desire to impose regulation on a rapidly advancing industry while it awaits further action by Congress.  

As the world continues to grapple with AI governance, Just Security asked top experts to reflect on what the E.O. and OMB memo mean for the future of AI and efforts to regulate it: 

NEW Bishop Garrison, Senior Fellow and Adjunct Professor, National Security Institute, the Scalia Law School, George Mason University

The Biden administration’s AI Executive Order is a welcome response to an ever-evolving technology landscape that has the potential to drastically reshape our world. It remains to be seen whether this directive will be the best-suited approach to AI regulation, given the agility and complexity of the issue. The OMB Implementation Plan goes to great lengths to create organizational infrastructure within federal agencies to ensure each has the capability to leverage AI solutions while minimizing the associated risks. These mitigation efforts focus on the development, use, and deployment of AI technologies. The plan also directs agencies to designate Chief AI Officers while noting that the EO does not supersede current law and policy. This is well-intentioned, but it could add yet another layer of bureaucracy to an already complicated issue when many existing offices hold authorities that can provide proper programmatic oversight. Yet another agency-level office with explicit responsibility for the topic could create unnecessary internal territorial disputes, further complicating matters.

Additionally, the carve-out deferring the national security apparatus to future guidance may ring hollow given the inherent rights and liberties concerns associated with operational use. These are the very same concerns the EO identifies as priorities to address. In particular, as the Pentagon races to create autonomous drone swarms for use on the battlefield, it is difficult to argue that the overall risk profile for privacy, civil rights, and civil liberties is not at least heightened. For transparency’s sake, guidance for national security agencies should have been released alongside, or perhaps even before, the domestic agency guidance, with only the most sensitive directives redacted if necessary. Otherwise, the optics invite the inference that the needs of national security may outweigh the broader personal and humanitarian rights concerns the administration has already voiced.

The EO does, however, provide the first overarching U.S. regulatory framework to date. The previous absence of federal guidance left a patchwork of state and local laws, set against international regulations, for businesses to navigate. To that end, the largest issue with this executive order, as others have rightly pointed out, is that it is an executive order, which lacks the strength and resilience of legislation. The President’s implementation and enforcement of federal law is strongest when he or she acts in line with legislative action. Executive orders can be rescinded, and Capitol Hill has yet to pass a comprehensive law on the subject. Just as the AI Bill of Rights was a welcome first step in the right direction, this AI EO is ultimately another step up a nebulous staircase. It provides guidance designed to take advantage of the potential benefits of AI technologies while protecting against their potential harms. It gets the country closer to its destination, but it is still one part of a longer journey.

Paul M. Barrett, Deputy Director of the Center for Business and Human Rights at New York University’s Stern School of Business

There is a lot to like in President Biden’s sweeping executive order on artificial intelligence. At a high level of generality, it signals the administration’s determination to avoid the fateful mistake that Congress and the rest of the federal government made about social media companies. As the social media industry grew and consolidated in the 2010s, legislators and policymakers essentially accepted Silicon Valley’s marketing pitch that increasingly enormous platforms were devoted only to connecting people to family and friends and promoting free expression. Washington largely ignored the dark side of social media: the data hoarding, privacy violations, harm to children, harassment, and spread of misinformation. If the executive order is equivalent to a starter’s gun for a regulatory marathon focused on AI, that is to be applauded.

But the order’s ambition to have something to say about virtually every AI-related risk imaginable itself creates a policymaking risk — namely, a failure to prioritize action on AI dangers that are present in the here and now, as opposed to more speculative concerns about existential threats to the future of humankind. Borrowing from a report published in June by the NYU Stern Center for Business and Human Rights, where I work: “The best way to prepare for any potential existential threat from AI is for the tech industry, public officials, academics, and civil society organizations to address the risks right in front of us. We need rules for today’s AI technology that will mitigate immediate hazards and serve as a starting point for one day possibly having to deal with much more ominous dangers.” In shaping and promoting the order, the administration could have done a better job of sorting and prioritizing the dangers AI poses, identifying risks that demand attention immediately, such as the turbocharging of “deepfake” imagery, political misinformation, and a range of fraudulent activity that the Federal Trade Commission already has authority to investigate and stop.

Finally, the order reveals a built-in fallibility of unilateral White House action: The president may issue instructions to federal agencies, but those instructions are vulnerable to reversal by a successor and, in any event, lack the economy-wide breadth and enforcement teeth of well-crafted congressional legislation. The issuance of the executive order should not distract from the need for laws protecting online privacy, demanding greater transparency from currently opaque digital giants, and providing antitrust enforcers with more tools to keep those giants in check. Belatedly anxious about the malign influences of social media, Congress has recently considered proposals in all of those areas. The Biden administration needs to spur a revival of those legislative efforts, which, if fine-tuned intelligently, have the potential to extend salutary government oversight over AI, social media, and other products and services from Silicon Valley.

Faiza Patel, Senior Director of the Brennan Center for Justice’s Liberty and National Security Program

The President’s Executive Order on AI and the OMB implementation guidance incorporate a number of principles and mechanisms to ensure that these tools are deployed safely and fairly. The OMB guidance includes an extensive list of AI uses that are presumed to be “rights-impacting” and prescribes fairly robust risk management practices for the federal government’s use of them. But carve-outs for national security and the intelligence community create huge loopholes that the administration must urgently address.

The AI order exempts “AI used as a component of a national security system and for military and intelligence purposes,” creating a separate standard for these ventures, which have enormous and direct impacts on the lives of Americans and people around the globe. While national security considerations may require some adjustments to the processes set out in the order, its principles (which are drawn in large part from the 2022 Blueprint for an AI Bill of Rights) are surely at least as crucial for national security and intelligence programs. Surely, we want these programs to be safe, effective, and non-discriminatory. At a minimum, the order could have indicated that its principles—if not its specifics—would apply to national security and intelligence. Instead, it simply says that a forthcoming national security memorandum to be proposed to the president should provide guidance to the Department of Defense, the Intelligence Community, and other relevant agencies with a general reference to the need for risk management practices for rights-impacting AI.

Nor is it clear what exactly is being carved out. At a National Security Council briefing ahead of the release of the order, an official stated that the exemption covered the systems referenced in 44 U.S.C. 3552(a)(6) – i.e., information systems that involve intelligence activities or cryptologic activities relating to national security or are “critical to the direct fulfillment of military or intelligence missions,” as well as those involving command and control of military forces and equipment integral to weapons. This list is not exhaustive; other systems can be similarly protected by statute or executive order.

Clarity on the scope of the national security exemption is urgently needed.

Julie Owono, Executive Director of Internet Sans Frontières, Member of the Meta Oversight Board, and Affiliate at the Berkman Klein Center for Internet and Society at Harvard University

It’s as if the Biden administration is telling tech companies: “We’ve heard you, we’ll regulate you.” This E.O. kicks off a new era in the long history of the Internet: one where regulators no longer seem to be merely playing catch-up with tech. We seem to be moving away from the “move fast and break things” attitude that prevailed through the social media era.

This E.O. also shows a very ambitious administration that hopes to force innovation in areas where tech companies have struggled, for instance, the provenance of AI-generated content. My hope is that President Biden’s E.O. creates a virtuous circle of innovation in which public and private sector actors strive to identify a comprehensive solution to AI provenance.

Finally, we see an Administration signaling that, yet again, the United States will position itself as a leader on the global stage. The E.O. and the creation of the White House AI Council make it clear that Biden himself will oversee the delivery of this agenda.

But an important component is missing. Stakeholders from civil society and academia seem to have helped the Administration identify and understand risks; indeed, we are in this new era thanks to their vigilance. The Biden administration should reward that vigilance by giving civil society actors a bigger role in developing solutions that make AI safe, trustworthy, and helpful for American society.

Justin Hendrix, CEO and Editor of Tech Policy Press

One area where the order is likely overly optimistic about technical solutions is with regard to media integrity. Watermarks, detection mechanisms, and other tools to determine whether media is synthetic or manipulated are never going to be reliable solutions at scale.

The document directs nearly the entire alphabet soup of federal agencies to do things on AI. But it is primarily a national security order. Additional analysis from Justin on the AI Executive Order has been published here.

IMAGE: U.S. President Joe Biden and Vice President Kamala Harris arrive for an event about their administration’s approach to artificial intelligence in the East Room of the White House on October 30, 2023 in Washington, DC. President Biden issued a new executive order on artificial intelligence that day. (Photo by Chip Somodevilla/Getty Images)