The U.S. Capitol is seen after the House narrowly passed a bill forwarding President Donald Trump's agenda at the U.S. Capitol on May 22, 2025 in Washington, DC. (Photo by Kevin Dietsch/Getty Images)

AI Governance Needs Federalism, Not a Federally Imposed Moratorium

On May 22, the U.S. House of Representatives passed a budget proposal including a ten-year moratorium on state and local regulation of AI. The proposal aims to nullify dozens of existing state AI laws and block states from enacting new ones. Congress should reject the proposed “AI preemption moratorium.” It is bad policy and is likely unconstitutional under the Tenth Amendment.

Proponents of the moratorium point to the fragmented patchwork of state AI laws as justification, claiming that preempting state regulation will spur innovation and help the United States outpace China. But this argument rests on a false dichotomy between regulation and innovation. Regulation can drive innovation by establishing clear rules, building public trust, and encouraging adoption. If the United States wants to lead in AI, it must do so by upholding its democratic values and building systems people trust—not by sidelining the institutions best positioned to govern responsibly.

Congress has yet to meaningfully regulate AI in the private sector and is unlikely to do so in the near future. Meanwhile, states across the country have stepped into the vacuum—enacting laws aimed at promoting transparency, accountability, and consumer protection across critical domains, including education, employment, housing, and healthcare. These efforts reflect not only public demand but also the practical need to tailor governance to specific risks and contexts. There are no ready-made blueprints for regulating a general-purpose technology like AI. That is why state-level experimentation—rooted in democratic legitimacy and policy flexibility—remains essential. The moratorium would halt this progress at precisely the moment when we need it most.

Who Decides AI Regulation? 

The pivot to preemption transforms first-order questions of how to regulate AI into second-order questions of who decides. The stakes are high: whoever decides AI regulation will determine the content, scope, and timing of whatever policies emerge. Congress clearly has the power to regulate AI. But for Congress to say, in effect, “we choose not to regulate, and we won’t let the states regulate either,” is an unusual—and likely unconstitutional—assertion of national power.

The Tenth Amendment reserves to the states all powers not delegated to the federal government. The anti-commandeering doctrine protects this constitutional balance by forbidding Congress from commanding state governments to enact or refrain from enacting laws. Congress has broad commerce power and can regulate private actors in ways that preempt contrary state laws under the Supremacy Clause of the Constitution. However, Congress cannot directly regulate state institutions—the Tenth Amendment forbids it. 

The Supreme Court’s 2018 decision in Murphy v. NCAA reinforced these principles. The federal statute at issue—PASPA—prohibited states from legalizing sports gambling, which the Court held was an unconstitutional commandeering. PASPA’s fatal flaw was the absence of any regulation of private action. It neither conferred federal rights on individuals or entities desiring to conduct sports gambling operations nor imposed any federal restrictions on private actors. Thus, there was “no way to understand the provision prohibiting state authorization as anything other than a direct command to the States,” which was an unconstitutional commandeering.

In so holding, the Court distinguished commandeering from permissible “deregulatory preemption.” As an example of the latter, it cited the Airline Deregulation Act (ADA), which prohibits states from enacting or enforcing laws “relating to rates, routes, or services of any [covered] air carrier.” Crucially, this prohibition operated within a comprehensive federal scheme governing the airline industry. In that context, the Court interpreted the ADA to confer a limited “federal right” for airline carriers to set rates, routes, and services without state interference. Thus, unlike PASPA, the ADA satisfied the preemption predicate of regulating private activity.

Distinguishing between permissible deregulatory preemption and unconstitutional commandeering can sometimes prove difficult. However, the proposed AI moratorium is not a close case. Its text does not even purport to regulate private conduct. And no federal framework governs AI development and deployment in private markets. To comply with the Tenth Amendment, Congress must affirmatively regulate private action, which can take the form of an explicit right to engage in certain activity. As Murphy made clear, the federal right must be more than an incidental consequence of a prohibition on state action. Otherwise, the distinction between preemption and commandeering would collapse.

The anti-commandeering doctrine is more than a drafting exercise. It provides a constitutional check on Congress’s legislative power that safeguards liberty, promotes government accountability, and forces political deliberation. Achieving congressional consensus on AI regulation will prove difficult. Yet that is partly the point. Congress cannot compensate for its inability or unwillingness to regulate AI by silencing states that can.  

Significant national interests cannot override this constitutional principle. The anti-commandeering doctrine establishes a categorical bar, not a balancing test. Even mild intrusions on state sovereignty violate constitutional limits. The proposed moratorium obliterates them. Rather than targeting specific activities in particular sectors, the moratorium attaches to AI technology itself across all applications and social contexts. This intrusion into traditional state police powers is unprecedented. It would eliminate states’ ability to protect citizens in areas including consumer protection, public health and safety, civil rights, education, law enforcement, labor, and employment. The moratorium might even prevent states from adapting generally applicable laws, such as negligence and consumer fraud, to account for AI’s unique regulatory challenges.

One might expect that a law stripping all states of regulatory power in a prominent field for a decade would meet resistance from state governments and their representatives. This is one of the political safeguards of federalism: state representatives in Congress can block federal laws that would preempt state law or negotiate for more favorable terms to protect state interests. In addition to these political checks, however, the anti-commandeering principle provides a constitutional backstop. Put otherwise, the Tenth Amendment prohibits Congress from commandeering state governments, even if a majority of state representatives vote in favor.

A Blow to Federalism

Beyond constitutional problems, the proposed moratorium puts democratic discourse on AI at risk. Federalism serves not merely to protect state sovereignty but to foster democratic representation and policy experimentation. Given profound uncertainties about AI’s impacts, states’ ability to test regulatory approaches without committing the entire nation provides crucial benefits. 

These state-level experiments generate vital empirical data about what works, what fails, and what requires refinement. State regulation is often a compromise solution that reflects multiple and competing interests. Stakeholders are rarely fully satisfied. But the negotiated outcomes provide information about what stakeholders can live with, even if not their first preference.

States also catalyze robust policy debate and public engagement that might not occur in a fully centralized system. The political imbroglio around California’s SB 1047 is a case in point. Had it been enacted, SB 1047 would have imposed risk mitigation requirements, state oversight, and liability in the event of catastrophic harm incurred in the state. SB 1047 forced debates about risk allocation, accountability, and regulatory design. 

While staged in California, SB 1047 sparked a wide-ranging debate that informed national AI policy discourse. Unlike in Congress, where commitments to oversight remain abstract, the real possibility that SB 1047 might become California law was a mask-off moment. We learned, for example, that industry giants like OpenAI, Microsoft, and Google vehemently oppose regulation of their frontier AI models, despite expressing support for government oversight in congressional hearings and public relations campaigns.

Moreover, the decentralized structure of American federalism makes it more resilient to regulatory capture and institutional failure. Whether in red states or blue states, political venues will always be available to advance policy preferences that diverge from national approaches. Precisely because of deep ideological differences, federalist AI policy may appeal to progressives and conservatives alike. Centralized national AI policy has surface appeal in terms of establishing regulatory coherence, but achieving it requires a broad consensus about fundamental values and priorities, a consensus that does not currently exist when it comes to AI. Federalism provides a mechanism and the institutional structures for that necessary discourse.

Building Smarter AI Governance

While not without its drawbacks, federalism offers what a fully centralized system does not: the ability to engage in policy innovation, widespread political participation, and iterative adaptation. All are essential for effective AI governance. The proposed AI moratorium is antithetical to these values. 

Prohibiting state legislatures from addressing AI-related challenges for a decade would hobble the values of federalism precisely at a time when government oversight is most needed. It is highly questionable whether Congress, after ten years of state inaction, would suddenly possess the wisdom to craft comprehensive and effective AI rules. Even a shorter moratorium would be problematic. Policy decisions that are made—and not made—within the next two years will create path dependencies and trajectories that will echo for decades. Federalism ensures that no single institution can decide for all the rest.

Instead of shutting out the states, Congress can achieve regulatory cohesion in ways that respect constitutional limits and harness their positive potential. For instance, Congress could adopt a cooperative federalism framework, establishing federal baseline standards while allowing states room to experiment within federal parameters that promote national and local values simultaneously. This approach has been used effectively in areas from environmental law to healthcare, and it would preserve both national coordination and local innovation. Congress may also use its spending power to incentivize state compliance with federal AI goals, so long as the conditions are non-coercive.

If the United States is to lead in AI, it must do so in a way that reflects its constitutional commitments and political ideals, not by abandoning them. True AI progress comes not from regulatory paralysis but from building AI systems that people trust, and that can be deployed responsibly across diverse contexts and populations.

Whatever Congress decides, it cannot be—and should not be—a blanket moratorium on state AI regulation. The great genius of our federalism is not its efficiency, but rather its adaptability, resiliency, and pluralist capacity. AI’s transformative potential demands thoughtful governance structures, not shortcuts.
