How Trump Should Approach AI Talks With China: Targeted Dialogue, Maximum Pressure
At the upcoming Trump-Xi summit, Beijing will not negotiate in good faith on AI safety. A narrowly scoped dialogue paired with maximum pressure on export controls is the only way to shift Beijing’s calculus and secure long-term AI safety.

By Chris McGuire, CFR Senior Fellow for China and Emerging Technologies
Chris McGuire is a senior fellow for China and emerging technologies at the Council on Foreign Relations. He led U.S.-China AI policy while serving in the National Security Council under President Joe Biden. He also previously served as the State Department’s lead expert on U.S.-Russia arms control policy during the first Trump administration.
U.S. President Donald Trump and Chinese President Xi Jinping plan to discuss issues related to artificial intelligence (AI) when they meet in Beijing—as they should. AI increasingly underpins global economic growth, drives technological innovation, and is reshaping battlefields around the world. Modern AI models are the most powerful cybersecurity and hacking weapons ever created, and their capabilities are doubling roughly every four months. AI is simply too important to ignore.
The United States and China are reportedly considering establishing a dialogue on AI safety. The Chinese government has long sought such a dialogue, but the unfortunate reality is that its willingness to make and abide by robust international commitments on AI safety is low. Beijing views these dialogues as an opportunity to increase its access to the technology China needs to catch up to the United States in AI. If the United States and China do establish a regular AI dialogue, the only effective way to change Beijing’s calculus is to ensure the dialogue is focused exclusively on safety and to couple it with a “maximum pressure” campaign that tightens export controls and expands the U.S. lead in AI as much as possible.
The United States and China do have a shared interest in preventing the release of AI models with certain dangerous capabilities. If a non-state actor uses an AI model to develop a biological weapon, that could pose catastrophic risks to both the United States and China. Over the long term, addressing these risks will require cooperation.
But the United States cannot just wish good faith cooperation from China into existence. The Chinese government’s AI priorities are primarily driven by the risks of falling further behind the United States, not the risks posed by non-state actors using dangerous models. China is only eight months behind the United States in AI—a significant margin, but a gap that China believes it can overcome. As AI capabilities rapidly improve, Chinese AI-enabled cyberattacks and military and intelligence operations may soon be the largest national security threat that the United States and its allies face. And as AI rapidly becomes the largest driver of the global economy, U.S. and Chinese AI companies are fighting for global market share.
The smaller the gap between the United States and China, the more likely China will be able to use its own AI capabilities to hold the United States at risk—and the more economic gains will flow to Chinese companies. Beijing recognizes this dynamic, which is why it is engaging in an aggressive campaign to steal U.S. AI technology. These efforts include the smuggling of advanced U.S. AI chips, “distillation attacks” against U.S. models to illicitly replicate their capabilities, and other strategies.
The Chinese government’s view that AI safety dialogues are a means to close this capability gap was on full display when the United States and China held the only such dialogue in 2024 under President Joe Biden. The United States government sent leading technical experts who outlined areas of greatest shared risk; the Chinese government sent diplomats who complained about U.S. export controls on AI chips. Chinese AI companies and government leaders have repeatedly stated that U.S. export controls are the single biggest constraint on China’s AI development.
The Chinese government’s perspective on AI safety cooperation, and its behavior at past U.S.-China AI dialogues, is also consistent with, and informed by, its longstanding refusal to agree to substantive arms control measures with the United States. China views arms control with extreme skepticism, and China’s track record of abiding by arms control commitments it does make is poor. Leading People’s Liberation Army (PLA) military strategists have described arms control as a “struggle” that great powers use to protect their advantages, and have asserted that Soviet concessions to the United States in arms control negotiations weakened the Soviet Union’s strategic position and contributed to its decline. Make no mistake, the Chinese government would view any agreement to limit China’s AI capabilities as a form of arms control.
China’s skepticism of arms control also stems in part from the fact that it was never a party to a Cuban Missile Crisis-like event, which instilled in U.S. and Soviet leaders and negotiators a visceral sense of responsibility to prevent global catastrophe. U.S.-Soviet nuclear negotiations produced no substantive results until the Cuban Missile Crisis. But in 1963, just nine months after that event, the two countries signed the Hotline Agreement and the Limited Test Ban Treaty, the first agreements to establish crisis communications systems and limit certain dangerous nuclear activities. Chinese leaders have no similar experience to draw from.
While a U.S.-China AI safety dialogue could help establish relationships and lay the foundation for substantive negotiations in the future, it will not change the perspective of the Chinese government on these issues. So long as China believes it has a chance of catching up with the United States in AI and does not fear reprisal from the United States for potential noncompliance, an effective U.S.-China agreement on AI safety is unattainable. China is currently extremely unlikely to agree to measures that would impose meaningful constraints on its ability to close the gap with the United States. And even if it did, any agreement would be impossible to verify—and China is unlikely to abide by it.
To reach an effective agreement on AI safety with China, the United States therefore must change the structural conditions informing the Chinese government’s current unwillingness to negotiate in good faith. There are three ways it could do so:
- Washington could give in to Beijing’s requests to loosen AI-related export controls and permit China to catch up to the United States in AI. The U.S. government would then have to hope that China both complied with any agreement and refrained from using its newly powerful AI capabilities to undermine U.S. national security.
- The United States could impose a “maximum pressure” campaign that seeks to increase the gap between U.S. and Chinese AI capabilities and increase Washington’s leverage by tightening export controls. This would eliminate Beijing’s access to U.S. technology that is currently driving its AI development.
- The United States could keep the status quo and wait for an external event—a “Cuban Missile Crisis” related to AI—that compels the Chinese government to value global priorities on AI safety ahead of its own priorities on AI capability development.
Of these, the second is the only responsible path, and by far the most effective one. If the Chinese government believed there to be a wide and rapidly expanding AI gap between the United States and China—and viewed existing U.S. AI capabilities as posing a profound risk to its national security—it would likely view negotiations that impose even modest constraints on U.S. AI capabilities as in its national interest. China would have little leverage in these negotiations, but it would be far more likely to comply with any agreement: Beijing would fear detection and reprisal by Washington, enabled by Washington’s superior AI models.
If the United States significantly tightened export controls on China, it could expand the U.S. lead from eight months to eighteen or twenty-four—an eternity in AI development. Chinese firms remain extremely dependent on U.S. computing power, the most critical input into AI development. China will produce only about 2 percent of the AI computing power of U.S. firms this year, and the computing power needed to develop and serve a leading AI model is increasing exponentially. U.S. export controls have materially slowed China’s AI development, but they contain significant loopholes that allow China to purchase U.S. AI chips, remotely access them via the cloud, smuggle them through third countries, or use U.S. chipmaking technology to manufacture its own. The presence of these loopholes is not an inevitability; it is a policy choice that can be changed.
Trump’s goal in Beijing should not be to reach an agreement with China on AI safety, but to create the conditions for such an agreement down the road. If the Trump administration does establish a dialogue with China on AI, it must set clear expectations with Beijing that the dialogue will be narrowly focused on AI safety and will not cover export controls. Simultaneously, any such dialogue must be coupled with a “maximum pressure” campaign that imposes robust export controls, closing all existing loopholes to maximize the U.S. lead over China. Just as the United States and the Soviet Union never assisted each other’s nuclear weapons programs, the United States and China should not assist each other’s efforts to develop advanced AI models.
The only alternatives to this approach are to give China the tools to catch up to the United States in AI and hope it operates in good faith, or wait for a global catastrophe to shock the Chinese into good faith cooperation. The first gambles the United States’ security on China’s goodwill; the second gambles it on a disaster terrible enough to change Beijing’s calculus. Maximum pressure with dialogue not only preserves U.S. AI leadership—it’s also the best way to achieve long-term AI safety.
This work represents the views and opinions solely of the author. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.