November 26, 2025 | Policy Brief

DeepSeek May Intentionally Produce Malicious Code Due to Chinese Political Bias, Research Shows

China’s top artificial intelligence (AI) models don’t simply follow the Communist Party line — they act on it. On November 20, the cybersecurity firm CrowdStrike published a study suggesting that AI models produced by Chinese firm DeepSeek were prone to writing flawed computer code when prompted using references that Beijing deems politically sensitive, particularly those related to repressed minority groups.

The report, which follows an investigation by the National Institute of Standards and Technology into DeepSeek’s security protocols, highlights the growing combined threats of adversarial influence and cyber espionage posed by China’s open-source AI models.

The report found that while DeepSeek performed coding tasks with a high degree of proficiency, its performance declined dramatically when prompts included terms the Chinese Communist Party (CCP) deems politically sensitive. DeepSeek’s less powerful models performed predictably worse than better-resourced American competitors. However, the firm’s top-line model produced secure computer code in response to users’ requests on par with its most direct Western competitors, highlighting the firm’s success in fielding a competitive alternative.

Nonetheless, the quality of DeepSeek’s code appeared to erode dramatically once terms such as Uyghurs, Tibet, or Xinjiang were introduced into users’ requests for software intended to serve those regions or minority groups. While Western models could experience small declines in quality when prompted with these terms, DeepSeek’s responses often contained significant security vulnerabilities that would allow hackers to steal data or take over systems remotely. Notably, these issues appeared to stem from DeepSeek’s inherent biases rather than from poor training data, as researchers discovered that the vulnerabilities were added only after DeepSeek’s reasoning process ended, rather than throughout its “thought process.”
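The report does not reproduce the flawed code itself, but a minimal, hypothetical sketch can illustrate the class of vulnerability described above. The Python example below was written for this brief rather than drawn from the CrowdStrike study; the table and function names are illustrative assumptions. It contrasts a query built by string formatting, which an attacker can hijack to read data they should not see, with a parameterized query that treats user input purely as data.

import sqlite3

def find_user_vulnerable(db_path: str, username: str) -> list:
    # Unsafe: the query is assembled with string formatting, so input such as
    # "x' OR '1'='1" rewrites the query's logic and returns every row,
    # the kind of data-theft flaw the report describes.
    with sqlite3.connect(db_path) as conn:
        query = f"SELECT * FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

def find_user_safe(db_path: str, username: str) -> list:
    # Safe: a parameterized query keeps user input separate from the SQL text,
    # so the same malicious string is matched literally and returns nothing.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()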

The results echo previous findings suggesting that DeepSeek, whose models have improved dramatically over time, struggles to overcome significant security vulnerabilities. Since its initial launch last year, DeepSeek, like other Chinese models operating under Chinese law, has refused to answer questions about the CCP’s behavior, and its models show signs of having been trained on domestic propaganda. In a September study by the National Institute of Standards and Technology, DeepSeek’s models remained highly vulnerable to executing malicious prompts, including running malware or exfiltrating passwords, even as their performance on mathematics and scientific tasks remained on par with American competitors.

However, DeepSeek, along with other Chinese open-source models, has grown increasingly popular in the United States. Given their malleable architecture, lower cost structures, and improved quality relative to more expensive, closed proprietary systems, Chinese open-source models have remained prevalent among both smaller American start-ups seeking funding for expansion and firms in emerging economies. This popularity is also partially a result of Chinese industrial policy, as Beijing has pushed firms to develop open-source platforms to build a customer base dependent on Chinese model architecture and to prevent the formation of key chokepoints around AI software tools.

Better Open-Source Models Are the Key

The United States should counter these combined risks by strengthening procurement regulations while supporting open-source model development. Congress should work to ban Chinese AI models, including open-source models, from operating on critical infrastructure or government devices, where they pose significant cybersecurity risks. These efforts should also extend to government contractors and service providers to produce a trusted supply chain and create a market incentive for firms to assess their own vulnerabilities.

These efforts should be paired with Washington prioritizing the development of American open-source models, particularly since U.S. firms have the competitive advantage of access to reams of high-quality data. The Trump administration has issued an executive order allowing private developers to access datasets held by the National Laboratories; building on that order, the Department of Energy should prioritize partnerships with firms working on open-source code, allowing the American people to benefit directly from their investment and countering a growing Chinese advantage.

Jack Burnham is a senior research analyst in the China Program at the Foundation for Defense of Democracies (FDD). Leah Siskind is a research fellow at the Center on Cyber and Technology Innovation (CCTI) at FDD, where she focuses on artificial intelligence. For more analysis from Jack, Leah, and FDD, please subscribe HERE. Follow Jack on X @JackBurnham802. Follow Leah on X @Leahsiskind. Follow FDD on X @FDD and @FDD_CCTI. FDD is a Washington, DC-based, nonpartisan research institute focusing on national security and foreign policy.
