June 12, 2025 | Policy Brief
New Report Shows How China Uses AI to Augment Its Online Intelligence Operations
Artificial intelligence (AI) empowers China to conduct operations that blur the line between collecting sensitive information and influencing public opinion, and a June 10 report by OpenAI explains how.
Chinese intelligence operations already leverage online platforms to target government employees, military personnel, and academic researchers for recruitment as intelligence sources. OpenAI’s report shows how commercial AI tools can increase the speed, scale, and possibly the success of these operations.
Chinese Operation Blurs Line Between Influence and Intelligence
One operation captured in the OpenAI report specifically matches a known pattern of behavior from Beijing. As in many other Chinese campaigns, the operators pretended to be geopolitical risk consultants. They used OpenAI models to develop biographies for online personas, generate various types of content, analyze datasets, and translate correspondence.
While Chinese actors have used many of these tactics before, this particular operation is notable because those running it blended elements of intelligence gathering with foreign malign influence. The operators used OpenAI’s models to develop personas on X posing as journalists and geopolitical analysts, personas that OpenAI assessed likely conducted a covert influence operation.
At the same time, the operators used OpenAI’s models to refine and translate unspecified correspondence intended for a U.S. senator about a government nominee. While OpenAI could not confirm whether the operators actually sent the correspondence, the activity demonstrates an intent to influence a member of Congress, or at least to gather intelligence from one.
Beijing Likely Uses Contractors to Conduct Espionage and Influence Operations
The prompts the operators used with OpenAI’s models suggest the operators were likely proxies or contractors rather than members of Beijing’s Ministry of State Security. OpenAI assessed that several prompts related to cyberattacks “suggest[ed] a low level of expertise.” These operators also used OpenAI’s models to create promotional materials aimed at the likely client for their services: the Chinese government.
Earlier this year, the Foundation for Defense of Democracies (FDD) exposed an operation in which Beijing likely used an internet services firm as a proxy for intelligence gathering; the firm built websites for fake consulting companies. OpenAI’s ability to observe prompts directly and to investigate their origins provides strong evidence of Beijing’s use of proxies.
Washington Must Strengthen Public-Private Partnerships With Major AI Companies
More robust information sharing between AI companies and the U.S. government can help disrupt adversarial influence and intelligence operations. While Washington has many streams of intelligence, it does not have access to the internal data of privately held AI companies like OpenAI. These companies are uniquely positioned to observe how adversaries exploit their products. They can also provide information such as the emails and phone numbers associated with adversaries’ accounts, as well as the tactics those adversaries regularly employ, all of which can aid federal law enforcement. Companies can supplement government actions to indict malicious actors or seize malicious domains with their own enforcement measures, such as suspending accounts engaged in malign behavior.
While threat intelligence sharing about AI threats may already occur between the government and AI companies, a formal public-private partnership would help streamline the process, allowing AI companies and government agencies to inform each other’s investigations and work together to disrupt adversarial operations. Government agencies already collaborate with private companies to share information about other cyber threats and mitigations; extending this practice beyond cybersecurity to other forms of malicious online activity, from foreign malign influence to AI-enabled espionage, will make Americans safer.
Max Lesser is a senior analyst on emerging threats at the Center on Cyber and Technology Innovation (CCTI) at FDD. FDD is a Washington, DC-based, nonpartisan research institute focusing on national security and foreign policy.