Iran is Winning: AI-Generated Fake News or Real?

The phrase “World War III” has flooded global headlines and social feeds for over a week. It began on February 28, with a US-Israel strike that killed Iran’s Supreme Leader Ayatollah Ali Khamenei in his fortified Tehran compound. Since then, social media has turned into the primary battlefield: raw, chaotic, and often convincingly false.

A viral post claimed Iranian strikes devastated a key US radar base in Qatar. Satellite imagery shared by Tehran Times showed craters gouged into the ground, collapsed structures, and rising smoke. To the average online user, this was seemingly clear proof of a major blow to American defenses in the Gulf. The image exploded on X, gaining hundreds of thousands of views in hours, with comments erupting in outrage, fear, and calls for retaliation. Panic spread quickly.

None of it was real.

The “evidence” was an AI-generated fake, manipulated from an older Google Earth photo of a Bahrain base and flagged by tools like Google’s SynthID. It is one of many deepfakes inundating platforms since the conflict escalated. In what many are calling the first major “AI war,” hyper-realistic fabrications spread faster than fact-checks, blurring truth and propaganda and risking global instability.

What Is AI-Generated Fake News? 

AI-generated fake news involves false or misleading content produced or altered using generative tools. These include deepfake software for videos/audio, text-to-image models like Midjourney or DALL-E, and video generators such as OpenAI’s Sora or Google’s Veo.

The results mimic real events: fabricated explosions, doctored satellite images showing nonexistent damage, or simulated leader statements. Detection clues include inconsistent lighting, anatomical glitches (e.g., deformed hands), unnatural motion, or watermarks like SynthID or Meta AI labels. However, AI’s rapid advancements make fakes harder and harder to spot.
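One machine-checkable clue is provenance metadata embedded in the file itself. The sketch below is illustrative only: it is emphatically not a SynthID detector (SynthID is an invisible watermark that only Google’s own tools can verify), and the marker strings are assumptions chosen for demonstration, loosely based on the C2PA manifest label and the IPTC digital-source-type tag for AI media.

```python
# Illustrative sketch: scan raw file bytes for *visible* provenance markers.
# NOTE: this is NOT a SynthID detector. SynthID is an invisible watermark
# verifiable only with Google's own tooling; robust checks should use a
# proper C2PA verifier. The marker strings here are illustrative assumptions.

KNOWN_MARKERS = {
    b"c2pa": "C2PA content-credentials manifest",
    b"trainedAlgorithmicMedia": "IPTC digital source type for AI-generated media",
    b"http://ns.adobe.com/xap/": "XMP metadata block (may carry provenance fields)",
}

def scan_for_provenance(data: bytes) -> list[str]:
    """Return descriptions of any known provenance markers found in the bytes."""
    return [desc for marker, desc in KNOWN_MARKERS.items() if marker in data]

if __name__ == "__main__":
    # Synthetic example: bytes embedding an IPTC AI-source tag.
    fake_image = b"\x89PNG...metadata...trainedAlgorithmicMedia..."
    print(scan_for_provenance(fake_image))
```

A hit proves little on its own (markers can be stripped or forged), which is why such checks complement rather than replace human fact-checking.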

In the current conflict, state-linked accounts and creators have exploited these tools to fabricate “victories” or exaggerate damage, often monetized through views.

Why Do These Fakes Spread So Rapidly?

Billions rely on social media for news, where algorithms favor emotional, engaging content over verified facts. In surveyed markets, social video has surged as a news source, with around 65% of people accessing news via video platforms, a shift driven predominantly by younger users who prefer short-form clips.

Platforms like X and TikTok enable instant sharing, amplified by bots, influencers, and coordinated accounts. A single video from the Iran conflict reportedly reached tens of millions of views across platforms before it was debunked as a fake, outpacing corrections and fueling confusion, distrust, and potential escalation.
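The speed mismatch between a viral fake and its correction can be made concrete with a toy model. Every number below is an illustrative assumption, not a measurement from the events described: the fake is shared geometrically from the start, while the correction launches several steps later and spreads more slowly.

```python
# Toy model of viral spread vs. a delayed correction.
# All parameters are illustrative assumptions, not measured values.

def cumulative_reach(initial: int, growth_per_step: float, steps: int) -> int:
    """Total people reached after `steps` rounds of geometric sharing."""
    reach, current = 0, initial
    for _ in range(steps):
        reach += current
        current = int(current * growth_per_step)
    return reach

# The fake spreads for 12 steps (e.g. hours), doubling each step.
fake_reach = cumulative_reach(initial=1000, growth_per_step=2.0, steps=12)
# The correction starts 6 steps later and grows more slowly.
correction_reach = cumulative_reach(initial=1000, growth_per_step=1.5, steps=6)

print(f"fake: {fake_reach:,} vs correction: {correction_reach:,}")
```

Under these assumed parameters the fake reaches millions while the correction reaches tens of thousands, which is the dynamic fact-checkers describe when they say debunks “outpace” nothing.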

AI-Generated Misinformation Floods the Iran-US Conflict

Here are some of the most viral AI fakes circulating since the conflict intensified:

  • AI image of Khamenei’s body under rubble: after reports of his death in US-Israel strikes, photos spread showing Iran’s Supreme Leader buried in debris, “discovered” by rescuers. Reuters and Google’s SynthID flagged it as AI-generated, yet it was still viewed hundreds of thousands of times on X and Facebook, shared by pro-US accounts.
  • Manipulated satellite images of US base damage: posts from Tehran Times showed a “destroyed” radar in Qatar or Bahrain. Analyses revealed AI alterations from real Google Earth bases, with no matching damage existing.
  • Burning Burj Khalifa in Dubai: dramatic AI videos depict the iconic skyscraper in flames, falsely tied to Iranian retaliation. Fact-checkers (e.g., Full Fact, BBC Verify) confirmed it was AI generated via SynthID watermarks. Yet, the video amassed tens of millions of views across platforms.

These originate from bots, state media (e.g., Iranian outlets broadcasting fakes), opportunistic creators, and influence operations. Genuine events risk being dismissed as fake, while propaganda sways opinion and pressures leaders.

Historically, misinformation has sparked chaos, from the 1938 War of the Worlds panic to the armed incident provoked by Pizzagate. In this conflict, exaggerated Iranian “successes” boost domestic morale but risk real escalation if believed abroad, creating a “liar’s dividend” that erodes trust in journalism and diplomacy.

The Pew Research Center’s recent analyses highlight the scale of this challenge. Americans largely foresee AI having negative effects on news and journalists, and many struggle to discern truth as generative AI becomes a news provider and deepfakes proliferate. Pew data shows that 51% of U.S. adults find it difficult to determine what is true in the news, a figure made more worrying by the roughly 90% of Americans who report encountering inaccurate information.

As Pew notes, “In 2026, the biggest challenge for news consumers won’t be finding information, it will be figuring out what to trust and how to make sense of a deluge of competing narratives and facts.”

What Can Be Done to Combat the AI Misinformation War?

The surge in AI-driven misinformation is escalating into a full-fledged warfare tactic, posing serious risks to global stability. Social platforms are struggling to keep pace, even as they roll out countermeasures.

X has introduced penalties, including 90-day suspensions from its Creator Revenue Sharing program for users who post undisclosed AI-generated videos depicting armed conflicts. Yet enforcement still leans heavily on Community Notes, metadata analysis, and user reports, tools that often react too slowly to viral deception.

Regulatory efforts are progressing, but remain fragmented and lag behind the speed of AI innovation, leaving a clear gap between policy and real-world threats:

  • EU AI Act: bans high-risk deceptive AI and mandates transparency/labeling. Most rules apply from August 2, 2026, with platforms required to mitigate systemic risks under the Digital Services Act.
  • US: state-level deepfake bans, including California’s election deepfake laws, operate alongside federal measures like the Protect Elections from Deceptive AI Act and the TAKE IT DOWN Act, which targets non-consensual imagery.
  • UAE/Dubai: strict cybercrime laws impose fines of up to AED 200,000 (€47,000) and jail (up to 2 years) for spreading false news or manipulating content during crises. Harsher penalties apply if it incites panic or harms security.

Regulating fake AI-generated content remains challenging: enforcement across borders is difficult, free-speech concerns arise, and state actors often ignore the rules. Public education on tools like reverse image search, universal watermarking, improved detection, and platform accountability have all been proposed as ways to reduce the risk, yet no systematic accountability framework currently exists.
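Reverse image search works because engines index perceptual fingerprints that survive re-encoding and minor edits. As a minimal stdlib-only sketch of the idea, the average hash below sets one bit per pixel depending on whether it is brighter than the image mean; the 8×8 pixel grids are synthetic stand-ins for decoded images, and real systems use far more robust features than this.

```python
# Minimal average-hash sketch: the kind of perceptual fingerprint that
# reverse image search builds on. The pixel grids below are synthetic
# stand-ins for decoded images; production systems use stronger features.

def average_hash(pixels: list[int]) -> int:
    """Bit i is set if pixel i is brighter than the mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# An "original" 8x8 grayscale image, a lightly edited copy, and an
# unrelated (inverted) image.
original = [10 * (i % 13) for i in range(64)]
edited = [p + 5 for p in original]      # uniform brightness shift
unrelated = [255 - p for p in original]  # inversion: a different image

d_edit = hamming_distance(average_hash(original), average_hash(edited))
d_unrelated = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_edit, d_unrelated)
```

The edited copy hashes identically to the original (distance 0) while the unrelated image lands far away, which is exactly the property that lets a search engine match a recycled Google Earth photo to its source despite AI alterations.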

Will AI-Generated Fake News Cost Trump His War?

Misinformation over the past weeks is not accidental. It is strategic warfare conducted in the information domain, with some believing it has the potential to cost Trump victory. Invented scenes of Iranian dominance (destroyed installations that never burned, missiles that never landed) strengthen morale inside Iran while quietly sowing hesitation and division outside it. When large numbers accept these falsehoods as fact, support for continued engagement can soften. Decision-makers then face mounting pressure to act prematurely, while trustworthy intelligence starts to be ignored.

The so-called liar’s dividend is already doing damage: real developments are dismissed as hoaxes, positions become entrenched, and any path to negotiation narrows. In a war that began on Trump’s watch, sustained deception risks widening cracks in American unity. A CNN/SSRS poll found that 59% of Americans disapprove of the decision to take military action in Iran and 60% believe Trump lacks a clear plan for handling the situation. That reluctance could translate into political pressure that forces missteps, potentially prolonging the conflict or accidentally widening it.

Today the side that controls perception can be as decisive as the side that controls territory. Verifying content before amplifying it is not polite etiquette. It is a frontline defense against misinformation determining the war’s conclusion.

Author: Grace Sharp

See Also:

Anthropic Defies Pentagon: Trump Bans Claude AI in Military Dispute

Biggest AI Surveillance Scandals Threatening Europe’s Privacy in 2026
