June 21, 2025 | Policy Brief

Iran Flails on the Battlefield but Touts Fake Victories Online

A missile slowly streaking across the Tel Aviv skyline smashes into a 13-story residential building, which instantly collapses. A devastating attack that never happened.

Though the signs that this video is AI-generated are clear, it is still going viral. Press TV, an Iranian state-owned news channel run by Islamic Republic of Iran Broadcasting, posted the fake video on its X account. It’s one of many popular images and videos of purported Iranian missile attacks being posted across social media platforms amid the chaos of the unfolding conflict with Israel.

AI Images & Videos Celebrate the Destruction of Israel

The video of the missile hitting a residential building in Tel Aviv shows an unnaturally slow trajectory across the sky and a rapid, unrealistic collapse of the building. The video also bears a Google Veo logo in the bottom left corner, the watermark that appears on videos generated by Google’s AI video model. But that didn’t stop Press TV and other Iranian social media accounts from posting the video as evidence of the regime’s military successes.

Elsewhere on X, Instagram, and TikTok, a video showing Israel’s Ben Gurion Airport reduced to rubble has received millions of views and thousands of likes. An AI-generated poster, made in the style of early 1900s Israeli artwork, warns Israelis to leave Tel Aviv immediately. Another image shows a large-scale bombardment of an Israeli urban center, with fiery streaks and pillars of smoke billowing out. An AI-generated post seen more than 40 million times shows what appear to be the three famous Azrieli skyscrapers in Tel Aviv crumbling and smoldering.

AI Supercharges Disinformation Creation and Dissemination

Social media platforms like X, Facebook, and TikTok have been flooded with AI-generated disinformation because of the ease of its creation and the speed at which it spreads online. AI has radically expanded the impact a single bad actor can have. Not only are malicious actors using it to create images and videos, but a single individual can also use AI to build and deploy an army of automated accounts, accomplishing what once required an entire network of individuals. A reverse-image search of another fake video of an Iranian strike, for example, reveals that the same video has been posted by hundreds of accounts. Even when platforms flag fake content promptly, the sheer volume of posts can overwhelm their moderation capacity. Moreover, bad actors can quickly alter and share social media content anonymously across multiple platforms, a strategy that often removes any forensic trace of the creator or the country of origin. These fake videos and images can begin to shape public opinion before social media companies remove them.

Real-Time Labeling of Fake Content Is Challenging; Skepticism and Vigilance Are Needed

While low-quality fakes can be easy to spot, readers without strong media literacy can still fall prey to deception. Platforms like X and Facebook rely heavily on AI tools to assess whether content is authentic or AI-generated, and those tools can themselves be fooled by AI images and videos. The Grok AI chatbot serves as the primary AI content reviewer for X posts, and therefore as an authoritative source for what is and is not accurate, yet it regularly mislabels AI-generated posts as authentic and, as a result, spreads disinformation. It’s incumbent on companies like xAI, Grok’s creator, to stop verifying that images or videos are authentic when their tools cannot do so reliably.

Audiences, too, should review conflict coverage on social media with a strong degree of skepticism and scrutiny. To confirm the accuracy of a post, readers should acquaint themselves with the typical traits of AI-generated content, such as unnatural details or repetitive patterns; verify the source of the information; cross-check it with trusted news sources; and even use tools like Google reverse image search to see whether the clip appears in other misleading posts. While misinformation is common in any emerging conflict, Iran and its supporters are using AI-generated content to try to influence the narrative about Tehran’s successes. As the war between Iran and Israel plays out in the physical and digital realms, vigilance is vital.

Leah Siskind is an artificial intelligence (AI) research fellow at FDD’s Center on Cyber and Technology Innovation, where she focuses on the adversarial use of AI by state and non-state actors targeting the United States and its allies. For more analysis from Leah and FDD, please subscribe HERE. Follow FDD on X @FDD, @FDD_CCTI, and @FDD_Iran. FDD is a Washington, DC-based, nonpartisan research institute focusing on national security and foreign policy.

Issues:

Artificial Intelligence (AI), Information Warfare, Iran, Israel, Israel at War

Topics:

Iran, Israel, Tehran, Twitter, Washington, Israelis, Tel Aviv, Facebook, Google, Instagram, Artificial intelligence, Press TV, TikTok, Islamic Republic of Iran Broadcasting, Ben Gurion Airport