When policymakers discuss artificial intelligence and export controls, the conversation typically centers on advanced semiconductors or AI model weights—the mathematical parameters that govern how the AI model processes information. Both the Biden and Trump administrations have restricted AI chip exports to China and other countries of concern, and the Biden administration’s January 2025 Diffusion Rule proposed extending controls to AI model weights. But these debates obscure another consequential challenge that has gone largely unaddressed: the application of export controls to AI model outputs—the specific text, code, or other responses that users elicit from the system.
Model weights and model outputs present fundamentally different challenges. Possession of the weights allows an adversary to deploy models without restrictions, modify them for malicious purposes, or study them to develop competing systems. But a foreign adversary doesn’t need to obtain model weights to benefit from a model’s capabilities; access to a publicly deployed model’s API or web interface may suffice to elicit controlled information. For instance, a user in a restricted destination could try to exploit a U.S. model to generate code for a missile guidance system or schematics for an advanced radar component. Model outputs thus represent a distinct national security challenge that persists regardless of whether any restrictions are placed on model weights.
Frontier models today can likely generate technical information controlled under the International Traffic in Arms Regulations (ITAR), which restrict defense-related technical data, and the Export Administration Regulations (EAR), which control dual-use technology. Yet these frameworks, designed for discrete transfers of static information between known parties, are ill-suited to govern AI systems that generate unlimited, dynamic outputs on demand for potentially anonymous users worldwide.
The agencies responsible for enforcing these controls—the State Department’s Directorate of Defense Trade Controls (DDTC) for the ITAR and the Commerce Department’s Bureau of Industry and Security (BIS) for the EAR—have yet to address this challenge with authoritative guidance. The result is a policy vacuum that serves neither national security nor economic competitiveness.
Why Export Controls Apply to AI Outputs
Can AI-generated outputs be subject to export controls? The clear answer under existing law is yes. The ITAR’s definition of “technical data” focuses on functional characteristics—information necessary for the design, development, operation, or production of defense articles—without regard to whether that information was produced by a human engineer, photocopied from a blueprint, or synthesized by an AI model. The EAR’s definition of “technology” similarly encompasses information necessary for development, production, or use of controlled items regardless of whether it was created by AI or humans.
This content-focused approach makes strategic sense. A detailed schematic for a missile guidance system would pose the same proliferation risk whether it appears in a leaked document or an AI chat window. The national security harm stems from the information itself, not how it was generated.
Testing by the Law Reform Institute confirms that this isn’t a hypothetical concern. Working with an ITAR expert who previously conducted commodity jurisdiction analyses for DDTC, we assessed whether publicly available frontier models could generate information that would likely qualify as ITAR-controlled technical data. Models from four leading U.S. developers were tested across several categories of defense articles on the ITAR’s U.S. Munitions List. Every tested model produced such information in at least one category. (The examples of defense articles noted elsewhere in this article are purely hypothetical. The particular categories tested by LRI are not being publicly disclosed to avoid providing a roadmap for circumventing ITAR restrictions.)
These tests had limitations: they established capability as a proof of concept rather than a comprehensive benchmark, and no items were manufactured to verify the accuracy of the model outputs. Nevertheless, the results demonstrate that the problem already exists in nascent form.
Additionally, the Law Reform Institute’s testing relied exclusively on straightforward queries, forgoing “jailbreaks” or other adversarial techniques used to circumvent any safeguards that may have been designed to prevent the models from assisting with these topics. A more determined adversary would likely extract far more, as the defenses typically built into publicly available models are porous. The National Institute of Standards and Technology (NIST) has warned that AI remains vulnerable to attacks, and researchers have found that professional red-teamers can bypass safety defenses more than 70 percent of the time. As Anthropic CEO Dario Amodei observed in April, the AI industry is in a race between safety and capabilities—one in which capabilities are currently advancing faster. Thus, if current trends continue, the controlled information that can be obtained from frontier models will likely increase in scope, sensitivity, and accuracy.
The National Security Stakes
What could adversaries gain from ready access to AI-generated controlled information? Future AI models that may be capable of generating detailed technical data and technology—from specifications for advanced radar systems and guidance algorithms for precision munitions to semiconductor fabrication techniques and quantum computing processes—could help adversaries overcome technical barriers in both defense and dual-use technologies.
Perhaps most significantly, as these capabilities mature, an adversary would gain an on-demand technical consultant that can iterate on designs, troubleshoot problems, and provide explanations tailored to specific needs—a capability that poses a unique national security threat. And unlike traditional channels through which controlled information typically travels—traceable shipments, emails, or physical meetings—an adversary prompting a publicly available model leaves minimal independently discoverable forensic evidence.
Who Bears Responsibility for the Export?
One of the most fundamental questions in applying export controls to AI outputs is deceptively simple: who is the “exporter” when a model generates controlled information? This question matters because export control liability attaches to the party responsible for the export.
Under the EAR, the “exporter” is “the person in the United States who has the authority of the principal party in interest to determine and control the sending of items out of the United States.” While the ITAR doesn’t explicitly define “exporter,” the term appears throughout the regulations in contexts assuming the exporter is the person who controls and effectuates the export and is responsible for obtaining authorization.
In traditional scenarios, identifying the exporter is straightforward. When Boeing ships aircraft components to a foreign buyer, Boeing is the exporter. When an engineer emails technical drawings to an overseas facility, the engineer (or their employer) is the exporter.
But AI model outputs scramble this clarity. When a foreign national in China prompts an American AI model to generate controlled technical data, who exported the data? The foreign user can’t be the exporter—that person is the recipient whose access triggers export control requirements. As a practical matter, the most defensible analysis is that the company that developed and deployed the AI system and gave the user access should be considered the exporter—at least for closed-weight models where the developer and deployer are the same entity. (Open-weight models—which allow users to download the full model to modify and run locally—raise distinct issues beyond this article’s scope.)
Such entities have the authority “to determine and control” the export, even if that control is imperfect. As with other software tools, developers and deployers decide whether to implement technical safeguards, screen users, or restrict access to prevent controlled outputs—and are thus uniquely positioned to take actions to mitigate national security harms before making the system accessible. Moreover, under the strict liability standard applied to civil export violations, even a user “tricking” a model via a jailbreak would not automatically absolve the developer of liability for the resulting unauthorized export.
This assessment has profound implications. Absent contrary guidance from DDTC or BIS, AI companies that deploy models capable of generating controlled information likely bear export control compliance responsibility—whether or not they intended their models to have such capabilities, and regardless of how users employ the systems. These companies may therefore already be “exporters” subject to ITAR and EAR requirements.
Why the “Public Domain” and “Published” Exclusions Don’t Always Apply
Both the ITAR and EAR contain exclusions for information that is in the “public domain” or that is “published.” These carve-outs exist because controlling widely available information would be futile and would restrict legitimate research and public discourse. Because frontier AI models are generally trained on large datasets that include publicly available data from the internet, many model outputs will simply reproduce public information and qualify for these exclusions. At first glance, this might seem to largely resolve the AI outputs problem. But these exclusions don’t always apply to AI-generated outputs, for three reasons.
First, frontier AI models can synthesize novel information from disparate sources rather than simply reproducing existing data. They can generate combinations, insights, and emergent knowledge in response to user queries—synthesizing previously dispersed public information into structured guidance, or extrapolating beyond it, to create new controlled information absent from any single training source. As OpenAI’s CEO, Sam Altman, explained, such models function as “a reasoning engine, not a fact database”—they analyze and combine information rather than merely retrieve it. Because they can synthesize information to produce controlled data that never existed in published form, their outputs don’t necessarily constitute “public domain” or “published” data.
Second, the regulatory frameworks impose specific requirements for information to qualify as “published” or “public domain.” The ITAR’s “public domain” designation depends on dissemination through specific enumerated channels, such as sales at bookstores, availability at public libraries, or fundamental research at universities that is ordinarily published. The EAR’s “published” exclusion is broader, encompassing information that is publicly available without restrictions upon its further dissemination, including websites available to the public. Not all training data may meet both standards—and information that qualifies under the EAR’s broader exclusion may still fail to qualify under the ITAR.
Third, AI model outputs don’t automatically qualify as “published” or “public domain” simply because a publicly available model generates them. Both the ITAR standard (“generally accessible or available to the public”) and the EAR standard (public availability “without restrictions upon its further dissemination”) require broad public distribution. When an AI system generates a response to a particular prompt, it creates individualized content for a specific recipient, not publication to an unlimited audience.
The Core Compliance Problem
These legal complexities culminate in an acute practical challenge. Determining whether information qualifies as ITAR- or EAR-controlled typically requires expert analysis—hours of work parsing technical details against regulatory criteria. The analysis also depends on knowing the recipient’s nationality and location, since export control requirements vary by destination. A transfer to a Canadian citizen in Canada may require no license; the identical transfer to a South African national in the United States may trigger “deemed export” controls; the same transfer to a Russian national in Russia may be prohibited entirely.
An AI model generating responses to prompts lacks reliable access to this critical information. Users can falsify location data and obscure their identity. Even if a model attempted real-time export control classification of its own outputs, it would need to verify information that users have every incentive and ability to misrepresent. And the model would need to determine whether its synthesized output qualifies for the “public domain” or “published” exclusions—an analysis requiring judgment about whether the specific output existed in prior publications or constitutes novel controlled information.
AI developers do, of course, implement safeguards to prevent harmful outputs—including refusals for dangerous queries and content filtering systems. Whether or not these measures take export control classifications into account, they face fundamental challenges. Current safety systems may block obvious requests for bomb-making instructions, but they may struggle to detect the risk of generating controlled technical data when the request is masked by adversarial prompting or embedded in benign contexts (e.g., coding assistance or creative writing). Furthermore, they lack the technical and legal frameworks to systematically identify and prevent ITAR- or EAR-controlled outputs across all technical domains. Export control determinations require analyzing the intersection of technical specifications, regulatory classifications, recipient characteristics, and public domain status—a level of contextual judgment that current automated systems cannot reliably perform.
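To make that structural gap concrete, the sketch below is purely illustrative: every name is hypothetical and nothing here reflects any actual compliance system. It lists the facts a pre-release screening check would need to make an export control determination and shows that most of them are either unobservable or unverifiable at the moment a model generates a response.

```python
# Illustrative sketch only. All names are hypothetical; this is not a working
# compliance system. It shows the inputs an ITAR/EAR determination would need
# and why most are unavailable or unverifiable at inference time.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OutputScreeningContext:
    """Facts a real export control determination would turn on."""
    output_text: str
    recipient_nationality: Optional[str]       # self-reported at best; easily falsified
    recipient_location: Optional[str]          # IP geolocation; defeatable by VPN
    usml_or_eccn_match: Optional[str]          # requires expert technical classification
    appears_in_public_sources: Optional[bool]  # "published"/"public domain" analysis


def screen_output(ctx: OutputScreeningContext) -> str:
    """Naive pre-release check. Every branch depends on information the
    deployed model either cannot observe or cannot verify."""
    if ctx.usml_or_eccn_match is None:
        return "UNKNOWN: no reliable real-time technical classification"
    if ctx.appears_in_public_sources:
        return "ALLOW: output likely qualifies for a published/public domain exclusion"
    if ctx.recipient_nationality is None or ctx.recipient_location is None:
        return "UNKNOWN: recipient identity and destination unverified"
    return "BLOCK: potentially controlled output to an unvetted recipient"
```

Even this toy version makes the point: the decisive inputs are not the text of the output alone but the regulatory classification, the recipient, and the public availability analysis, none of which an automated filter can establish with confidence in real time.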
AI developers thus face a trilemma. First, they cannot reliably conduct real-time export control determinations. Users can misrepresent critical information, safety filters are imperfect and can be circumvented, and assessing the “public domain” and “published” exclusions requires individualized assessment. Second, they cannot implement blanket restrictions without crippling their models’ utility. And third, they cannot simply deploy models without controls and risk violating export regulations for which they may be held legally responsible.
The scope of the second option—blanket restrictions—reveals why it proves unworkable. Given the breadth of ITAR and EAR controls, which collectively span aerospace, defense, advanced manufacturing, emerging technologies, and dual-use items, comprehensive restrictions would undermine the use of cutting-edge tools for legitimate research, education, and commercial development.
Consider the strategic implications. If U.S. companies deploy frontier models that are hobbled by overbroad restrictions while Chinese labs like DeepSeek and Moonshot operate without comparable export restrictions, American competitiveness suffers without corresponding national security benefit. The EAR recognizes this dynamic in its foreign availability provisions, which allow BIS to remove or modify controls when comparable items are available from foreign sources. But these provisions only apply to specific items assessed case-by-case—they were never designed to address foreign AI models capable of generating new export-controlled information across multiple regulatory classifications.
Managing Deemed Export Risks for Internal Models
While public-facing models present the most visible challenge, AI labs also face deemed export risks from internal model use by employees. As models are being developed, and as they are deployed internally within labs prior to public release, they may lack the safety guardrails eventually built into public versions. Internal models may also be more capable than their public counterparts. If foreign national employees—who represent a substantial portion of the U.S. AI workforce in key technical fields—use these internal systems and elicit ITAR- or EAR-controlled outputs, deemed export violations could occur.
This internal challenge, however, has an established compliance mechanism, even if the technical implementation requires adaptation. AI labs can implement Technology Control Plans (TCPs)—the same framework used successfully across research universities, national laboratories, and the defense industrial base. A robust TCP for AI development would include comprehensive logging of internal model interactions, personnel screening protocols, and information security measures protecting digital access. Additional components would encompass physical security controls, employee training on export controls, and regular compliance audits. These measures, standard in industries handling controlled technology, can substantially reduce deemed export risks without excluding the international talent critical to U.S. AI leadership.
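As a concrete illustration of the logging component, the sketch below is a minimal, hypothetical example, not any lab’s actual tooling. It shows how an internal model call might be wrapped so that each interaction leaves an auditable record tied to the employee’s screening status under the TCP.

```python
# Illustrative sketch only: a hypothetical audit-logging helper for internal
# model access, one component of a Technology Control Plan. All names are
# invented for illustration and do not reflect any vendor's API.
import hashlib
import json
import time


def log_internal_interaction(log_path: str, employee_id: str,
                             authorization_status: str, prompt: str,
                             response: str) -> None:
    """Append an audit record for a single internal model interaction.

    Hashes are stored instead of raw text so the audit log itself does not
    become an uncontrolled copy of potentially controlled technical data.
    """
    record = {
        "timestamp": time.time(),
        "employee_id": employee_id,
        # e.g., "us_person", "licensed", or "restricted" per the TCP's screening
        "authorization_status": authorization_status,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Paired with personnel screening and periodic audits, records of this kind give compliance teams a way to reconstruct who accessed an internal model, when, and under what authorization, without the log itself reproducing sensitive outputs.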
Why Government Engagement Is Essential
The challenges outlined above aren’t problems that AI developers can solve independently. Export controls exist to protect national security interests—a fundamentally governmental function requiring government leadership. Current policy effectively delegates this responsibility to private companies, asking them to navigate, without guidance, a regulatory regime designed for an entirely different technology paradigm.
This approach carries real risks. Without authoritative guidance, labs face difficult choices between potentially violating export controls or implementing restrictions that degrade model utility. Some may adopt conservative approaches that limit innovation; others may take permissive stances that risk proliferation. This fragmentation serves neither national security nor competitiveness.
The stakes will only rise as capabilities advance. Today’s frontier models represent merely the beginning of what AI systems will be able to generate. And the scenarios explored here—closed-weight models deployed by U.S. developers—represent only one configuration. Open-weight models that allow for independent modification and deployment, U.S. cloud platforms hosting foreign-developed models, and cross-border collaborative development each raise distinct and complex export control questions. As models become more capable, the shortcomings of existing export control frameworks will be magnified absent active government engagement.
As we argue in a recent paper, the U.S. government needs to undertake a serious reassessment of how export controls apply to AI model outputs. The government should take a risk-based approach, focusing regulatory resources on the most security-sensitive domains, rather than attempting comprehensive control across all technical fields. Given that safety filters can be circumvented, a regulatory approach demanding zero-failure compliance for all controlled data is likely unachievable. Instead, compliance expectations must be calibrated—more stringent for the most sensitive technologies, more flexible for broader categories of dual-use items. Additionally, when evaluating whether U.S. models genuinely threaten national security by generating outputs that are currently export controlled, the government needs to account for the “foreign availability” of comparable capabilities in non-U.S. models. Developers should be given incentives to implement robust internal controls and work collaboratively with government to identify and address these high-priority risks.
Regulatory agencies like DDTC and BIS, drawing on the strategic assessments of the defense and intelligence communities and the technical expertise of bodies like NIST, possess the institutional knowledge to assess these tradeoffs. They can evaluate model capabilities against adversarial testing, analyze national security implications holistically, and develop compliance approaches that protect security without unnecessarily constraining innovation. But they must treat AI model outputs as an urgent policy priority—dedicating resources to understanding specific AI systems, engaging with developers and deployers, and adapting frameworks to address challenges current rules never contemplated.
The conversation about AI and national security must expand beyond semiconductors and model weights to encompass the outputs those technologies enable. DDTC and BIS have successfully adapted export controls to previous technological disruptions—from cryptography to additive manufacturing—and AI model outputs present the next adaptation challenge. The agencies possess the institutional knowledge to develop workable solutions, but doing so will require sustained attention and a willingness to rethink frameworks built for an earlier technological era. The race between safeguards and capability improvements is already underway; U.S. regulatory frameworks must move fast enough to keep pace.