In July, both the United States and China put forward their national visions for AI development and governance through their own AI Action Plans. Washington’s plan leans into the rhetoric of AI dominance and transactional dealmaking to advance U.S. national interests. In stark contrast, Beijing has pitched the world on a vision of AI governance that opposes U.S. hegemony, supports multilateralism, and embraces global capacity building in its Global AI Governance Action Plan.
Yet beneath the surface, the two countries’ AI strategies are converging in strikingly similar directions. Both now pursue the same three-pronged approach: accelerating domestic AI adoption, enabling government-supported AI exports and the open-source ecosystem, and managing AI risks without constraining development.
This convergence is new. U.S. AI policy has pivoted since President Donald Trump took office, more vocally supporting the open-source ecosystem and AI exports while downplaying AI safety. Chinese AI policy has also transformed, though over a longer time horizon: over the last two years, Beijing has moved away from heavy-handed ideological measures. China now has sufficient technological capability to usefully deploy AI in its economy and globally, and it has begun to slowly increase its discussion of frontier AI risks in key policy documents.
This strategic alignment will fundamentally shape global AI competition, turning it from an ideological confrontation into a race for domestic productivity gains and global technological influence. In that race, it is the capacity to deliver on the AI Action Plans—rather than their ideological visions—that will determine which superpower shapes the future of AI.
The AI Action Plans in Context
The two AI Action Plans emerged from markedly different institutional contexts, which shape both their authority and their strategic messaging. The U.S. AI Action Plan, published directly by the White House, is closely tied to executive branch policy and represents a continuation of Washington’s “America First” approach to technological leadership. Beijing’s Global AI Governance Action Plan, while issued through the Shanghai World AI Conference rather than by the Chinese government directly, reflects China’s current strategy of positioning itself as a champion of multilateralism and global technological cooperation—a framing that has crystallized in direct opposition to U.S. export controls and unilateral dominance narratives.
This more explicit connection to U.S. government policy, along with its more detailed content, means the U.S. Action Plan will play a more direct and prescriptive role in AI policy than China’s counterpart. China’s Global AI Governance Action Plan, by contrast, is notable more as an indicator of the broader trajectory of Chinese AI policy than as a new prescription for central planning in Chinese AI development and governance.
A close reading of both plans nevertheless reveals growing U.S.-China convergence in three core aspects of AI policy: accelerating domestic AI adoption, promoting global diffusion and standard setting, and managing AI risks without constraining development.
Domestic Acceleration of AI
First, both governments now recognize that their global AI ambitions rest not only on their leading AI companies, but also on the diffusion of AI throughout many other industries. As a result, both strategies highlight efforts to accelerate AI adoption. Economic diffusion may sound like an obvious goal, but it actually represents something of a pivot for both countries, albeit for different reasons. Much of the Biden administration’s policy was grounded in the belief that scaling the capabilities of frontier AI models was strategically critical. Much of the administration’s export control policy on AI chips, for example, rested on the presumption that AI capabilities could soon reach a point of rapid self-improvement. Concrete efforts to drive AI adoption in specific industries, while a priority in the public sector, were less strongly emphasized.
China, by contrast, only relatively recently began leveraging AI as a tool for economic growth after several years of tech crackdowns and strict regulatory measures that crippled innovation. During and immediately after the COVID lockdowns, Chinese tech policy focused largely on ensuring that technologies furthered the ideological interests of the Chinese Communist Party. Ideology and content control still matter for Beijing, but it has also loosened some of these measures to kickstart the economy and reverse the drying up of its venture capital ecosystem. As a result, Chinese AI policy has shifted toward an adoption-first strategy that prioritizes broad domestic use in the so-called “real economy.” And the growing maturity of China’s AI ecosystem—exemplified not just by DeepSeek’s rise but also by frontier models from other companies like Alibaba and Moonshot—now gives China the tools to pursue this goal.
According to China’s Global AI Governance Action Plan, that means applying AI across a range of fields such as industrial manufacturing, health care, and agriculture, largely under the banner of “deeply explor[ing] open application scenarios for ‘AI Plus,’” a Chinese initiative that serves as a rallying cry for AI diffusion in the real economy. The central government expects local governments and businesses to drive innovation under AI Plus and thereby kickstart much-needed economic growth.
Silicon Valley does not face the same venture capital constraints China does. Even so, the Trump administration’s strategy places renewed emphasis on accelerating broad adoption of AI throughout key sectors such as healthcare, energy, agriculture, and financial services. Concretely, the U.S. plans to create regulatory sandboxes where AI tools can be piloted without the full weight of existing rules, and to run focused programs through the National Institute of Standards and Technology to help specific industries adopt AI more quickly.
U.S. efforts to streamline overly burdensome regulation aren’t new. The Biden administration sought to make it easier to secure permits for AI infrastructure such as data centers and power sources. But those measures came only in the administration’s final days, whereas Trump has taken a more aggressive stance from the start. Moreover, Biden’s efforts to facilitate new AI infrastructure construction were not accompanied by equivalent efforts, as seen in the new U.S. AI Action Plan, to remove industry-specific barriers to AI adoption in regulated sectors like healthcare or finance.
Ultimately, Washington’s tactics differ from Beijing’s, but both sides have clearly landed on the same core idea: AI is now at a point where it can make real contributions to the overall domestic economy, and the government should help usher this along.
The Race for Global Diffusion and Standard Setting
Second, global AI diffusion is a key part of both countries’ economic strategies. Both AI plans confirm that Washington and Beijing see themselves as racing to capture global market share and dominate standard setting—a race they consider increasingly central to their geostrategic ambitions. What’s new here isn’t the race itself but rather the two countries’ convergence on a shared urgency and a common set of approaches involving open-source models and government-driven export promotion.
China’s AI action plan is replete with references to openness and open-source AI. Among other things, it calls for “strengthen[ing] the open-source ecosystem by enhancing compatibility, adaptation, and inter-connectivity between upstream and downstream products, and enabl[ing] the open flow of non-sensitive technology resources.” For China, building out and expanding its open-source ecosystem reflects the natural extension of its built-in comparative advantage in cheaper, less compute-intensive open-weight models. What has changed since the “DeepSeek moment” in January is that China views its own models as internationally competitive.
As a companion to its AI action plan, China also recently announced a World AI Cooperation Organization (WAICO) to concretize its ambitions of serving as a key hub of multilateral AI development institutions and a bridge to the Global South. Efforts to export the technology through multilateral, collaborative venues are a longstanding part of Chinese strategy. However, the launch of WAICO and the more confident globalist rhetoric of China’s AI action plan mark a clear intensification of this strategy, likely because Beijing senses a void it can fill in the “America First” era.
Yet Washington, too, has growing ambitions for global AI primacy. In particular, the U.S. AI Action Plan calls for so-called full-stack AI export packages—the government-facilitated tying of multiple AI technologies into a single commercial offering. In a notable departure from recent cuts to foreign assistance, the White House is prepared to support the diffusion of these packages through institutions such as the U.S. International Development Finance Corporation. This active promotion of AI exports reflects a significant shift from the previous administration, which largely left companies on their own to promote their own products.
One notable difference between the U.S. and Chinese global diffusion strategies is that Washington intends to make exports of its AI stacks selective—especially those containing cutting-edge elements. It plans to prioritize countries willing to join what it calls “America’s AI alliance” and align their export control policies against U.S. adversaries. Washington is likely to focus on NATO allies and other close partners, but it may also use AI exports as a tool to bring more firmly into its orbit countries that have traditionally balanced between the U.S. and China, such as the Gulf states.
Like China’s plan, the new administration’s AI Action Plan goes all-in on open-source models as a means of global diffusion. The U.S. plan extols the benefits of open-source while omitting any discussion of risks—a significant shift from the Biden administration’s more cautious and equivocal approach. The Biden administration had carried out an exhaustive analysis of open-source AI, eventually finding that government restrictions were premature while nevertheless calling for continued vigilance in an array of risk areas. Trump’s plan, by contrast, draws a clear line in the sand that “the decision of whether and how to release an open or closed model is fundamentally up to the developer.” Seeing potential “geostrategic value” in U.S. open models “becom[ing] global standards in some areas of business and in academic research worldwide,” the Trump administration intends to “create a supportive environment for open models.”
While both governments view international diffusion as a way to expand global market share, they also use it to project their worldviews abroad. For Washington, this means using AI policy as a means of extending its anti-woke agenda: the U.S. AI Action Plan calls for stripping references to misinformation, Diversity, Equity, and Inclusion (DEI), and climate change from its AI risk management frameworks, and for using federal procurement power to police LLMs for “ideological bias.” For Beijing, it involves vetting its models for ideological alignment with core socialist values before deployment, forcing models to refuse queries on politically sensitive issues such as Tiananmen Square. Even in the information control space, global economic dependencies on AI systems have become a pathway for expanding ideological reach beyond each country’s borders.
Muted but Continuing Safety Efforts
Finally, both countries’ AI strategies treat safety as a legitimate concern that is nevertheless clearly less important than economic innovation. This, too, represents an odd kind of convergence—with each country moving in opposite directions from different starting points to end up in roughly the same place. For Washington, the enshrinement of safety as a secondary goal marks a downgrade from the Biden era, when it had roughly co-equal status alongside economic opportunity. Meanwhile, China’s discussion of frontier AI risks has slowly increased in the last few years. But because the U.S. safety ecosystem was much more mature to begin with, the two sides are still far apart in absolute terms.
The U.S. AI Action Plan offers the strongest indication yet that the Trump administration’s vision for AI dominance includes a pillar focused on responding to frontier AI risks. To be sure, Trump’s team does not refer to “AI safety” anywhere in the document. But a similar set of concerns is recast in other terms: “interpretability, control, and robustness,” “performance and reliability,” “impacts to critical services or infrastructure,” and “novel national security risks.”
Testing and evaluation earns its own section of the Plan. Two bodies within the U.S. Department of Commerce, the National Institute of Standards and Technology (NIST) and the Center for AI Standards and Innovation (CAISI)—the recently rebranded U.S. AI Safety Institute—have been empowered to conduct evaluations for chemical, biological, radiological, nuclear, and explosive (CBRNE) risks, offensive cyber risks, and other national security risks. In addition, the U.S. Action Plan emphasizes the importance of biosecurity, federal capacity for incident response, and critical infrastructure cybersecurity.
Similarly, China’s Global AI Governance Action Plan underscores the importance of AI safety and of building out the country’s testing and evaluation ecosystem, and it highlights emergency response, data security, and traceability management. For China, the increasing level of detail on potential safety measures reflects a slow but steady march toward articulating concerns about frontier AI risks in government documents.
Overall, the U.S. AI Action Plan demonstrates a more sophisticated and detailed approach to evaluating and mitigating frontier AI risks than China’s. For example, while the Chinese plan vaguely references the need to “conduct timely risk assessment of AI and propose targeted prevention and response measures” and “explore categorized and tiered management approaches,” the U.S. plan outlines specific risk categories, designates particular agencies with evaluation responsibilities, and establishes concrete institutional frameworks such as AI hackathon initiatives to test system vulnerabilities and technical standards for high-security data centers.
In both plans, AI safety overall is treated as a secondary priority. This muted approach to safety reflects a shared strategic calculation: both governments appear to believe AI risks can be managed after achieving competitive advantages, rather than constraining development upfront. Neither wants to implement strong safety measures that competitors might ignore. However, both maintain testing and evaluation frameworks that could enable more aggressive safety measures if risks materialize or competitive dynamics shift.
The Path Ahead
The U.S. and Chinese Action Plans outline different approaches in pursuit of similar goals: achieving gains from AI throughout the “real economy,” pursuing global market share for geostrategic purposes, and mitigating national security risks.
Ultimately, AI Action Plans will not determine whether the U.S. or China wins the AI race. The real determinants lie in how each government turns vision into practice—through budgets, staffing, research and development funds, and infrastructure reforms that shape the pace and scope of adoption.
Implementation is likely to look very different across the two systems. In the U.S., execution will hinge on coordination across a fragmented landscape of federal agencies, state governments, and private industry, where regulatory clarity and funding flows will determine whether ambitious goals translate into practice. In China, by contrast, implementation depends on the ability of central directives to mobilize provincial governments and major technology firms, a model that can scale quickly but is prone to uneven follow-through and local experimentation.
Both Washington and Beijing are pursuing strikingly similar goals, even if they are following different Action Plans. The real winner will not be the country with the better strategic vision, but the one that executes its vision best.