AI-generated war footage is hitting millions before fact-checkers finish their coffee, and Grok is flagging the real stuff as fake.
WIRED exposed Grok flagging real footage as fake, the BBC profiled creators monetizing synthetic war clips, and Foreign Policy tracked Iranian state deepfake operations.
X users are split between those sharing AI-generated war footage as real and those demanding platform accountability after Grok itself failed to flag obvious fakes.
On March 7th, a video surfaced on X showing what appeared to be Iranian ballistic missiles slamming into Tel Aviv's Azrieli towers. Within ninety minutes it had been viewed eleven million times. The footage was entirely synthetic — generated by an AI model that can produce photorealistic destruction in under four minutes. By the time fact-checkers posted their corrections, the clip had already been screenshotted, reposted across Telegram channels, and embedded in at least three Arabic-language news aggregators [1].
This is the signature dynamic of what researchers are now calling the first AI-native war. The US-Israeli conflict with Iran has produced not merely propaganda — every war does that — but an industrial-scale fabrication apparatus that operates faster than any human verification system can match [2]. The volume is staggering. France 24 documented hundreds of AI-generated videos circulating on X within the conflict's first week alone, despite the platform's stated policy crackdown [3].
The machinery has three distinct layers. At the top sits state-sponsored production. Foreign Policy reported that Iranian intelligence services are running coordinated deepfake operations designed to sway both domestic and Western audiences, deploying fabricated footage of American military casualties and staged civilian suffering [4]. The sophistication is notable: these are not crude cut-and-paste jobs but fluid, contextually plausible scenes rendered by models trained on real conflict footage.
Below the state actors sit the grifters. The BBC found a thriving cottage industry of creators using generative AI to produce fake war content purely for monetization — fabricated explosions, invented casualty reports, synthetic satellite imagery — all engineered to harvest engagement and ad revenue [5]. One creator interviewed by the BBC admitted to generating over forty fake war videos in a single week, each pulling tens of thousands of views. The Burj Khalifa engulfed in fire. Iranian missiles raining on Riyadh. None of it happened [6].
The third layer is perhaps the most corrosive: the platforms themselves. WIRED's investigation revealed that X's own AI assistant, Grok, has been actively failing at verification — in one documented case flagging authentic combat footage as AI-generated while letting obvious deepfakes pass unchallenged [1]. The inversion is almost poetic: the AI built to police AI content is itself contributing to the confusion. X has since announced it will "take action" against AI deepfakes of the war, though the specifics remain characteristically vague [7].
What makes this conflict categorically different from previous information wars is speed. During the 2022 Russian invasion of Ukraine, disinformation typically took hours or days to gain traction. Here, synthetic content reaches millions within minutes. Deepfake detection tools are struggling to keep pace — one analysis found a detection system rating a fabricated clip as just 0.1 percent likely to be synthetic [8]. The verification window has effectively collapsed.
The deeper damage is epistemic. When fabricated and authentic footage become visually indistinguishable, the rational response for ordinary viewers is not to trust nothing but to trust whatever confirms their priors. Mother Jones documented this precise phenomenon: audiences are not being fooled so much as they are selecting which reality to inhabit [6]. The deepfake does not need to convince — it merely needs to exist, providing a scaffold for belief.
Some technologists argue that provenance standards like C2PA metadata offer a path forward, embedding cryptographic proof of origin into media files at the moment of creation [8]. But adoption remains fractional, and the incentive structures run in the wrong direction. Generating fake content is cheap, fast, and profitable. Verifying it is slow, expensive, and thankless.
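The core idea behind provenance standards is simple enough to sketch. The real C2PA specification binds a media file to a signed manifest using X.509 certificate chains and CBOR-encoded claims; the toy below is not C2PA, just a hypothetical stdlib illustration of the same tamper-evidence principle — hash the media, sign the hash plus its claims, and let any later edit break the verification. The key, field names, and claim values are all invented for the example.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key. Real C2PA uses asymmetric keys and
# certificate chains, not a shared HMAC secret like this.
SECRET = b"publisher-signing-key"

def attach_provenance(media: bytes, claims: dict) -> dict:
    """Bundle a hash of the media with its claims and sign the bundle."""
    manifest = {"sha256": hashlib.sha256(media).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return {"manifest": manifest,
            "signature": base64.b64encode(sig).decode()}

def verify_provenance(media: bytes, record: dict) -> bool:
    """Recompute the signature and the media hash; any edit to the
    pixels or to the claims invalidates the record."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    given = base64.b64decode(record["signature"])
    return (hmac.compare_digest(expected, given)
            and record["manifest"]["sha256"]
                == hashlib.sha256(media).hexdigest())

clip = b"\x00raw-video-bytes"  # stand-in for a video file
rec = attach_provenance(clip, {"device": "camera-01",
                               "captured": "2025-03-07T14:02Z"})
assert verify_provenance(clip, rec)                # untouched: passes
assert not verify_provenance(clip + b"x", rec)    # edited media: fails
```

The asymmetry the article describes shows up even in this sketch: producing a convincing fake takes one generation call, while this kind of check only helps if cameras, editing tools, and platforms all carry the manifest forward intact.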
The Iran conflict will end. The infrastructure it has normalized will not.
-- DAVID CHEN, Shanghai