February 5, 2025 | Policy Brief
America’s Adversaries Use AI for Malign Influence, But Not to Great Effect … Yet
Iran, China, and Russia all used Google’s generative artificial intelligence (AI) tool Gemini to accelerate their influence operations (IO), Google warned in a new report published on January 29. At the same time, Google concluded that AI is “not yet the game-changer it is sometimes portrayed to be.” Still, the report signals that the United States has a prime opportunity to preempt and prevent AI-driven tactics that adversaries may use to greater effect in future influence operations.
Iran Is the Heaviest User of Google’s AI for Adversarial Activity
Google reported that Iran accounted for three-quarters of all cases in which adversaries used Gemini for IO. OpenAI similarly discovered that Iran used its products last August to develop content for IO targeting the 2024 U.S. presidential elections. FDD also previously found evidence of Iran using AI on a website targeting African American voters with political messaging in the lead-up to the elections, though FDD could not determine the specific AI model used.
Moreover, Google’s report reveals that Iran uses AI not only to create content for its malign influence operations but also for other purposes. Specifically, Iran used AI to rewrite its articles so they read as if written by a native English speaker and to improve the search engine optimization of its content in an attempt to reach a larger audience.
Google’s Report Reveals AI’s Role in Planning Influence Operations
Much of the media conversation surrounding generative AI in influence operations has focused on how AI can help adversaries generate content more quickly and make that content appear more realistic. Google’s report, by contrast, highlights how AI helps adversaries plan their influence operations.
Iran used Gemini to “provide details on economic and political issues in Iran, the US, the Middle East, and Europe,” according to Google’s report. Iran likely did this to polarize and manipulate target audiences. Iran notably queried Gemini about hot-button issues within its own borders, presumably to develop campaigns to sway its own citizens.
China similarly used Gemini to research current events in the United States and Taiwan, likely to leverage the results in its operations. Beijing also used Gemini to study the features of various social media platforms, knowledge that would allow operators to select the best platform for a given campaign. Interestingly, Google stated that China further used Gemini to “research foreign press coverage about China,” likely to gauge its target audiences’ perceptions of Beijing.
Russia leveraged Gemini in more creative ways, said Google, such as developing “content strategy for different social media platforms and regions” and brainstorming “ideas for a PR campaign and accompanying visual designs.” Russia also gauged the possibility of creating an online chatbot and “focused on the generative AI landscape,” which Google assessed “may indicate an interest in developing native capabilities in AI on infrastructure they control.”
AI’s Role in Malign Influence Still Low Impact, But America Should Preempt Future Threats
Google’s report aligns with FDD’s own assessment that AI did not significantly transform adversarial influence operations in the lead-up to the 2024 U.S. presidential elections. Previous reports by OpenAI similarly concluded that when U.S. adversaries used American AI products, they did not meaningfully increase audience engagement or the impact of their operations. A mid-September assessment from the U.S. intelligence community also determined that while generative AI helped “improve and accelerate aspects of foreign influence operations,” it did not “revolutionize such operations.”
Yet America should not be complacent. The U.S. intelligence community should explore how nation-state adversaries — as well as foreign terrorist organizations such as Hezbollah that have historically launched influence operations — may use generative AI in attempts to manipulate and deceive Americans. In addition to malign influence, this study should explore how both state and non-state actors are exploiting, or may one day soon exploit, AI in cyberattacks, espionage, and terrorist operations. That generative AI has not yet revolutionized influence operations does not mean that adversaries will stop trying to exploit this emerging technology.
Max Lesser is a senior analyst on emerging threats at the Center on Cyber and Technology Innovation (CCTI) at the Foundation for Defense of Democracies (FDD), where Mariam Davlashelidze is an intern. For more analysis from the authors, CCTI, and FDD’s Iran Program, please subscribe HERE. Follow FDD on X @FDD and @FDD_CCTI. FDD is a Washington, DC-based, nonpartisan research institute focused on national security and foreign policy.