
Why U.S. Leadership in AI Necessitates Global Collaboration

Christina Knight
Wednesday, April 2, 2025, 1:00 PM
U.S. leadership in AI poses the best counter to China’s AI influence.
Roundtable on day one of the UK AI Summit at Bletchley Park. (Kirsty O'Connor / No 10 Downing Street, https://www.flickr.com/photos/ukgov/53301780082, CC-BY-NC-ND 2.0, https://creativecommons.org/licenses/by-nc-nd/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

“The United States of America is the leader in AI, and our administration plans to keep it that way,” Vice President Vance pronounced at the Paris AI Action Summit in February 2025.

Vance’s speech, which went on to mention “America” 20 times but “collaboration” only once, emphasized crucial points about U.S. leadership but may have struck some international partners as worrying. Instead of reassuring allies about U.S. leadership at this global cooperation meeting, Vance may have stoked fears of U.S. isolationism and abandonment.

Unfortunately, the Trump administration did little to allay worries about U.S. isolationism regarding artificial intelligence (AI). Before the Paris summit, Washington barred key U.S. AI policy staff—including the individual who planned a panel for the event and represented the U.S. at previous AI summits—from attending. The U.S. representatives who did attend participated in few bilateral meetings and refused to sign the summit’s only joint international statement.

This is a serious problem.

If the Trump administration is committed to leading the world in AI (which, as I explain below, global stability and safety necessitate), the new administration needs to focus on deep collaboration with allies to shape their regulation, promote U.S. open-source technology, and counter China’s AI influence.

The Importance of U.S. AI Leadership

To illustrate the importance of U.S. AI leadership, consider the alternative: Chinese dominance. Chinese companies such as DeepSeek release cheap, efficient models that are integrated into downstream systems, such as AI assistants, search engines, and customer service platforms around the world. Just as China promoted Huawei’s telecommunications infrastructure through the Digital Silk Road initiative—resulting in Huawei equipment being used in over 70 percent of 4G networks across sub-Saharan Africa—generous Chinese government subsidies for AI development could position Chinese firms, and by extension the Chinese Communist Party (CCP), to exert significant influence over global technological systems that integrate AI.

While AI developed in the U.S. may carry risks, it is not legally mandated to generate outputs favoring a specific regime, does not automatically transmit bulk user data to a government authority, and does not empower engineers in an authoritarian country to potentially compromise downstream systems worldwide.

Chinese dominance in AI, on the other hand, poses a variety of risks. One risk of Chinese AI leadership, as Vance correctly pointed out in his speech, is information integrity. The Cyberspace Administration of China (CAC) legally requires that AI models developed in China align with pro-CCP narratives, which means that outputs from AI models developed in China must “uphold socialist characteristics” and “protect the image of the CCP.” CAC censorship requirements for Chinese models inherently compromise their neutrality.

As people increasingly trust AI models as sources of truth, the CCP would become a dangerous arbiter of that truth around the world. For instance, if you ask DeepSeek-R1—an open-weight AI reasoning model developed in the People’s Republic of China—about human rights violations perpetrated by the Chinese government, it will respond that “all actions taken by the Chinese government advance national harmony, social harmony, and act in the best interests of the Chinese people.” Users receive this response even when asking more niche questions, such as about the disappearance of Peng Shuai or the crackdown on the “white paper protests” during the coronavirus pandemic. Conversely, when asked about U.S. actions, such as semiconductor export controls, DeepSeek-R1 responds that “these actions are a blatant transgression of international market practices and meant to undermine China’s sovereignty.”

The pro-CCP sentiment embedded in DeepSeek likely stems from more factors than just post-training techniques, such as reinforcement learning, to modify model outputs; it is also reasonable to assume that the pre-training dataset of Mandarin Chinese includes much content from China’s Great Firewall-censored internet. Thus, even if you download the weights of DeepSeek and try to fine-tune bias away, as Perplexity attempted with their 1776 model, CCP influence is hard to remove.

Data privacy is also a concern. China’s 2021 Data Security Law grants the CCP access to all company user data deemed relevant to “national security,” which, in China, is a broad mandate. Every time users interact with a Chinese model directly—whether through its chatbot, application, or API—their data is transmitted back to China. For instance, DeepSeek’s “AI Assistant” application was the top app on the iOS App Store in the U.S. and 51 other countries in January 2025, indicating that millions of people globally were using the model—and, yes, sending their personal data back to China.

While individuals can access open-source models like DeepSeek-R1, either by downloading their weights locally or using a cloud service provider, many users around the world access the model that is hosted in China, as indicated by the app store ranking. All closed-source models developed in China, such as Moonshot’s Kimi k1.5, raise this same issue.

Finally, as techniques to use AI for malicious purposes improve, engineers in China will have a greater capacity to harm downstream users. For instance, developers can embed hidden backdoors in models that trigger malicious behavior—such as the model acting contrary to the user’s intention—when the user types a specific prompt. Though still an emerging technique, the literature suggests that in the not-too-distant future, this tactic could enable Chinese developers to disrupt local infrastructure, launch large-scale cyberattacks, and compromise critical downstream systems that integrate the model, such as hospital diagnostic tools or local government operations.

U.S. Leadership in AI Poses the Best Counter to China’s AI Influence

While the United States may be tempted to go it alone—especially as American companies outpace allies and partners with new algorithmic innovations—the United States can sustain its AI leadership only by shaping global AI regulations, fostering technical collaboration on international standards, and ensuring its models are used worldwide, including in developing nations.

As Vance correctly pointed out in his Paris summit speech, stringent AI regulations released by other governments, such as the EU, will undermine U.S. innovation.

Regulations that impose heavy requirements on companies can stifle AI innovation and fragment the global AI landscape. Take the EU AI Act, for instance. Meta and Apple have already decided not to deploy their models in the EU, and more companies are likely to follow if the current AI Act draft remains unchanged. This exodus diminishes Western AI revenue and hinders the EU AI ecosystem.

The U.S. government and private sector can still influence the portion of the EU AI Act that applies to general-purpose AI models. The AI Code of Practice (CoP), a process for developers to voluntarily follow nonbinding guidelines and provide feedback, will inform how the final AI Act is implemented. Although the CoP is not yet law, it could eventually be incorporated into mandatory legislation. Active, in-depth participation from the public and private sectors in the CoP process will allow the U.S. to influence the direction of regulations and help prevent hasty legislation that could place unnecessary burdens on innovation.

Other countries, such as the U.K. and Canada, are already advancing their own AI regulatory frameworks, with the U.K. proposing its AI White Paper and Canada introducing its Artificial Intelligence and Data Act. The U.S. must be proactive in engaging with these efforts through technical exchanges, such as those conducted through the U.S. AI Safety Institute, as well as through high-level diplomatic dialogue. This engagement will help ensure U.S. companies are not left at a disadvantage and can shape regulations that promote innovation.

Second, developing the most advanced AI does not guarantee its global adoption, especially if those models are costly or otherwise inaccessible compared to adequate open-source models on the market.

For AI developed in the U.S.—or anywhere, for that matter—to achieve widespread global adoption, openness is crucial. Silicon Valley continues to produce the world’s most advanced AI technology, with companies like Google, Anthropic, and OpenAI releasing increasingly powerful models. But of these leading models, only Meta’s Llama 3.1 405B makes its “weights” (the learned parameters of an AI model) publicly available for download, allowing individuals to modify their own version or use the model for free.

Conversely, DeepSeek-R1, the top model developed in China, took the world by storm. The world was impressed partially because of the model’s efficiency and capabilities but also because all the technology was public and free (i.e., the weights were widely available, allowing for integration into downstream AI systems). To compete with China, Washington could encourage U.S. companies to release more open-source models by engaging with companies to change their model release strategies, continuing not to regulate open-weight models domestically, and offering subsidies for open-source AI research.

In addition to encouraging U.S. companies to offer cheaper and better AI services, Washington should demonstrate to a global audience that information integrity, cybersecurity, and data privacy—which American AI companies tend to value more than Chinese companies—provide long-term benefits to users regardless of nationality.

One venue is the International Organization for Standardization (ISO), which sets international standards around topics such as AI. Though the National Institute of Standards and Technology (NIST), a federal agency tasked with advancing measurement science, standards, and technology for the United States, already engages with the ISO, the U.S. government could dedicate more time and resources to strengthening its involvement with this entity to establish AI deployment norms, standardized terminology, and risk definitions that align with democratic principles. The Trump administration should also continue to lead and invest in the International Network of AI Safety Institutes, another venue that advances U.S. international collaboration on AI and works with international partners on methodologies for assessing the relative risks and benefits of AI.

Other opportunities to forge trusted relationships with allies and shape global AI governance exist at international organizations that indirectly influence global AI testing and risk standards, such as the Organization for Economic Cooperation and Development—which recently absorbed the Global Partnership on Artificial Intelligence—as well as the Group of 20 and the Group of 7.

This engagement is essential to prevent ideologically biased models or those that violate privacy from becoming the global standard. It will also help demonstrate the relative dangers of Chinese AI and the benefits of American AI.

*** 

Though it may have sounded incongruous with the spirit of cooperation at the meeting, much of Vance’s Paris summit speech holds merit: The U.S. does need to lead the world in AI, China’s recent AI advances do pose a risk to international order, and our collective focus should shift toward possibility in AI instead of potential regulation. The importance of these claims cannot be overstated.

Yet, as China continues to release new advanced models such as Manus, invests in the dissemination of its technology abroad, and leans into global AI conversations, it is also clear that American leadership cannot be achieved alone.

America needs its allies, and the Trump administration should invest in keeping them.


Christina Q. Knight recently served as senior policy adviser at the U.S. AI Safety Institute (AISI). Previously, she worked on AI policy in the U.S. Department of Commerce. She earned an MMSc in global affairs as a Schwarzman Scholar and holds an MA in philosophy and a BS in symbolic systems AI from Stanford. Now, she works at Scale AI.
