March 19, 2026 | Policy Brief
Deepfakes on the Front Lines: Iran’s AI Disinformation Campaign
The war with Iran has triggered a torrent of disturbing AI-generated imagery, videos, and narratives circulating widely on social media — such as smoke billowing from a Bahraini high-rise or rockets striking directly in the heart of Tel Aviv. The majority of these videos are produced by Iranian government-linked influence networks and are propagated by the Russian and Chinese information ecosystems — the web of state media, social platforms, and online actors through which Moscow and Beijing spread their message. This strategy exemplifies the authoritarian media playbook in action: the Iran-Russia-China-North Korea axis shares technology best practices with each other and then amplifies mutually beneficial anti-Western propaganda.
The advancement of AI tools in recent months, particularly the advent of AI agents that can act without human oversight, has made the creation of synthetic disinformation easier than ever. AI enables scalable psychological operations targeting both domestic and international audiences. In the past two weeks, The New York Times has identified more than 110 unique deepfakes that convey a pro-Iran message through battlefield images, missile strike depictions, and general war footage. Much like the deepfake deluge during the June 2025 Iran war, when Iranian accounts spread fake videos of prominent Israeli landmarks ablaze and repurposed battle footage from other conflicts, this content is designed to push a false narrative of Iranian military success and Western failure.
How the Axis of Disinformation Works
Cooperation between members of the authoritarian axis is especially effective because it doesn’t require close coordination or centralized command. Rather, each leverages its own existing information warfare infrastructure to spread similar narratives, benefitting from shared investment toward the common goal of destabilization.
While Iran produces deepfakes of downed American fighter jets being paraded through Tehran, or pro-regime content for fake Western influencer accounts, Russia and China amplify these posts. Russia has longstanding expertise in laundering disinformation and using bot networks on social media. China uses state-aligned media accounts to echo anti-U.S. narratives in order to compound confusion about what is happening on the ground. Last week, Chinese state media shared a claim that Iran shot down an American F-15 jet, a pro-China account posted a fake image claiming that the Iraqi resistance had shot down a U.S. KC-135 refueling aircraft, and another pro-China account claimed that Israeli Prime Minister Benjamin Netanyahu had fled his country.
Solutions Offered That Aren’t Really Solutions
Some social media platforms have begun to address the problem of AI-generated falsehoods. X announced on March 3 that it would punish creators who post AI war videos without labeling them as AI by booting them from X's revenue-sharing program for 90 days. This addresses the problem of rage-bait influencers who seek to monetize controversial content, but it does nothing to deter state-aligned accounts whose purpose is to spread disinformation, not to turn a profit. Coming from social media companies that have been extremely reluctant to address AI-generated disinformation, this is an important step. But X's policy must also impose nonfinancial consequences that disempower state-sponsored sources of disinformation, and these principles should be extended beyond active conflicts, which are currently the policy's only target.
Government and the Private Sector Should Work in Tandem To Fight Information Warfare
Washington has a critical role to play in combating the spread of AI-enabled disinformation. However, significant cuts to the FBI’s Foreign Influence Task Force, the State Department’s Global Engagement Center, and the Foreign Malign Influence Center at the Office of the Director of National Intelligence have greatly diminished the government’s ability to counter foreign influence operations.
To meet the evolving threats of AI propaganda and deter its axis of adversaries from sowing chaos domestically, the administration must rebuild and expand each of the aforementioned offices and continue to adapt their methods to the rapidly evolving media ecosystem. Ultimately, only with the public and private sectors working in concert to combat information warfare, the government by reinvesting in its own institutions and the platforms by implementing stronger content moderation policies, will the United States be able to regain its once significant advantage in this sphere.
Leah Siskind is a research fellow at the Foundation for Defense of Democracies’ (FDD’s) Center on Cyber and Technology Innovation (CCTI), where Marina Chernin is an intern. For more analysis from the authors and FDD, please subscribe HERE. Follow FDD on X @FDD and @FDD_CCTI. FDD is a Washington, DC-based, nonpartisan research institute focusing on foreign policy and national security.