April 8, 2026 | Policy Brief
The American AI Sector Bands Together To Stop Chinese Theft
China’s effort to steal American artificial intelligence (AI) has begun to turn foes into friends.
Three American AI firms locked in intense competition for market share — OpenAI, Anthropic, and Alphabet — announced plans on April 6 to share information on Chinese-linked industrial espionage with one another through the Frontier Model Forum, a non-profit the trio founded with Microsoft in 2023. Under the proposal, the firms will collaborate to detect and publicly report Chinese efforts to steal their proprietary information.
The announcement marks a strong first step in private-sector cooperation to secure the industry against Chinese attempts to steal American AI secrets, but further information sharing will require clearer government guidance and oversight.
Chinese Distillation Targets Leading U.S. AI Labs
Each of the three labs has been targeted by distillation attacks likely conducted by Chinese-linked operatives. In a distillation attack, a less-powerful “student” model learns from the outputs of a more powerful “teacher” model, gaining greater capabilities without significant investment in training runs and, often, without breaching any core systems. Beyond the theft of these firms’ intellectual property, distillation also poses a significant business risk, as many top Chinese models, allegedly trained on the outputs of Western competitors, are cheaper or free to run.
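The mechanics can be sketched in a few lines of code: a “student” repeatedly queries a “teacher,” never touching the teacher’s weights, and fits itself to the teacher’s outputs alone. This is a deliberately simplified toy (every function, parameter, and number below is invented for illustration), not a depiction of any firm’s models or of a real attack.

```python
# Toy illustration of distillation: a tiny "student" model learns to
# imitate a "teacher" purely by querying it, with no access to the
# teacher's internal parameters. All names and values are hypothetical.
import math
import random

def teacher(x):
    # Stand-in for a powerful proprietary model: maps an input to a
    # probability. Its internal parameters (2.0 and -1.0) are hidden
    # from the student, which only sees outputs.
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

def train_student(num_queries=2000, lr=0.1, seed=0):
    # The student is a one-parameter-pair logistic model. It never sees
    # ground-truth labels -- only the teacher's soft outputs.
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(num_queries):
        x = rng.uniform(-3, 3)
        target = teacher(x)                      # distilled signal from a query
        pred = 1 / (1 + math.exp(-(w * x + b)))  # student's current answer
        grad = pred - target                     # d(cross-entropy)/d(logit)
        w -= lr * grad * x
        b -= lr * grad
    return w, b

w, b = train_student()
# With enough queries, (w, b) approach the teacher's hidden (2.0, -1.0):
# capability is transferred without ever accessing the teacher's weights.
```

The point of the sketch is that the only resource consumed is queries, which is why distillation is cheap relative to training from scratch and hard to distinguish from ordinary heavy API usage.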
Allegations of Chinese distillation attacks began surfacing following Chinese firm DeepSeek’s release of its R1 reasoning model in January 2025. Shortly after R1’s launch, OpenAI and Microsoft alleged that DeepSeek had stolen components of ChatGPT, an allegation that OpenAI repeated in a letter to the China Select Committee in February 2026. Independent experts have also alleged that DeepSeek distilled portions of Google’s Gemini model following the release of an updated version of R1 in June 2025, while Anthropic accused three Chinese AI firms of conducting distillation attacks in February 2026.
Antitrust Law Complicates Private-Sector Response
While these AI firms have come together to counter Chinese distillation attacks, their cooperation will likely be hampered by long-standing antitrust law. Although the Department of Justice (DOJ) and the Federal Trade Commission (FTC) released a joint opinion in 2014 stating that cybersecurity cooperation was unlikely to violate antitrust law, the opinion carved out any information that might reveal sensitive business details or affect pricing. That gap has left AI firms cautious in the absence of clear legislation, particularly because cooperating against distillation may require sharing significant quantities of user data or other information used to set prices, exposing firms to enforcement actions.
The Trump administration has attempted to rectify these issues via executive action. As part of its AI Action Plan, released in July 2025, the administration proposed standing up an AI Information Sharing and Analysis Center (AI-ISAC), led by the Department of Homeland Security, to allow firms to share information on emerging threats with one another and with the federal government. However, in February 2026, the executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency revealed that there was no timeline for establishing the AI-ISAC.
AI Firms Should Borrow From Public-Private Models of Cybersecurity Cooperation
The American AI sector will remain vulnerable to Chinese espionage attempts so long as the industry cannot adequately cooperate to protect itself or communicate with the federal government without raising legitimate liability concerns.
While the Trump administration should accelerate its efforts to build out the AI-ISAC, Congress, the DOJ, and the FTC each have a role in ensuring the program reaches its potential. The DOJ and FTC should consider issuing a new opinion that updates their previous advice on the legality of sharing information related to distillation attacks. Congress should also consider amending the Cybersecurity Information Sharing Act of 2015, as part of a long-term reauthorization, to explicitly cover AI firms cooperating among themselves and sharing pertinent information with the federal government.
Jack Burnham is a senior research analyst in the China Program at the Foundation for Defense of Democracies (FDD). For more analysis from Jack and FDD, please subscribe HERE. Follow Jack on X @JackBurnham802. Follow FDD on X @FDD. FDD is a Washington, DC-based, nonpartisan research institute focusing on national security and foreign policy.