AI-Generated Fake War Imagery Proliferates: How False Information About Iran Conflict Sweeps Through Social Media

ChainNewsAbmedia

When the Russia-Ukraine war broke out in 2022, social media was flooded with low-quality fake images, whether Photoshop composites or screenshots taken from video games, movies, or old news reports circulated with false captions. Now the same tactics are reappearing in the Iran conflict, but this time with a deception tool that was not widespread in 2022: AI tools that anyone can use to generate high-quality, realistic videos and images.

Hany Farid, a digital forensics professor at the University of California, Berkeley, points out that ten years ago, similar fakes were limited to one or two pieces and were quickly exposed; today there are hundreds, with an astonishing level of realism. "Not only are they convincing, they are taking hold and deeply influencing people. Everyone believes them and keeps sharing and spreading them."

The proliferation of generative AI has drastically lowered the barrier to creating fakes.

Shayan Sardarizadeh, a senior BBC Verify reporter who has long tracked battlefield misinformation, states that the most significant change over the past year is the greatly reduced threshold for accessing generative AI. “Now anyone can produce highly convincing videos and images that look like major war scenes, and they are difficult to distinguish from real footage with the naked eye or non-expert analysis.”

Within less than two weeks of the Iran conflict starting, Sardarizadeh and other experts confirmed that multiple AI-generated fake videos had accumulated tens of millions of views across major social platforms.

What fake images are circulating?

The identified AI-forged content covers a wide range, including:

  • Fictional scenes of Iranian missile strikes on Tel Aviv, Israel

  • Panic scenes of people fleeing an attack at Tel Aviv airport

  • Videos of U.S. special forces being escorted at gunpoint by Iranian soldiers

  • “Surveillance footage” claiming to show Iranian military facilities being destroyed (three of these are AI-generated, one is a real event from last year)

  • U.S. military ground convoy operations inside Iran

  • Footage of debris from a downed U.S. aircraft being paraded through the streets of Tehran

In static images, there are scenes of U.S. military bases and the U.S. embassy burning after attacks by Iran, images of Iran’s Supreme Leader Khamenei being crushed under rubble, and scenes of Iranian civilians mourning casualties. Some media affiliated with the Iranian government have even released a forged satellite image claiming to show damage to the U.S. military base in Bahrain.

These are just the tip of the iceberg of the Iran-related misinformation currently circulating online.

Lax platform controls make misinformation harder to curb

Although Sardarizadeh and others debunk fakes daily, new forged content appears far faster than fact-checkers can keep up, and its realism is high enough that ordinary users scrolling through feeds struggle to spot it.

Some widely circulated fakes clearly originate from pro-Iran accounts with propaganda motives. But the motivations behind many others are harder to determine: traffic, influence, or profit, or simply that creating them has become so easy.

Farid sums up the dilemma: "Content is more convincing, more plentiful, and spreads more deeply. This is our current reality, and it's very chaotic."

Last week, platform X announced that paid creators who post unlabeled AI-generated battlefield images would be suspended from earning revenue for 90 days, with repeat offenders banned permanently. Farid doubts the policy will be effective, however, not least because most X users are not even enrolled in the creator monetization program, so the penalty does not apply to them. TikTok and Meta (which owns Facebook and Instagram) did not respond to CNN's requests for comment.

Even more concerning, X's own AI chatbot, Grok, has itself aided the spread of misinformation: Sardarizadeh has repeatedly caught it telling users that AI-generated videos were real footage.

How can we avoid being deceived by misinformation?

Farid admits that even the "AI fake-image detection tips" circulating a few months ago are now almost useless. In the past, one could look for clues such as the wrong number of fingers or distorted body proportions, but today's AI has corrected these obvious errors.

His fundamental advice is to actively seek information from reputable news sources rather than relying on strangers on social media. “In times of global conflict, social media is not the place to get your information.”

For users who still need to browse social media frequently, experts recommend:

  • Slow down: When encountering sensational battlefield images, take a few seconds to verify before sharing or believing

  • Observe details: Are the audio and visuals synchronized? Do the features match the real environment? AI still has flaws; some generated content still bears watermarks

  • Consult professional fact-checkers: Check if reputable fact-checking organizations or domain experts have commented on the image

  • Pay attention to comments: Sometimes ordinary users can raise valid doubts

  • Use AI detection tools: Although imperfect, they still provide some reference
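One low-effort check a reader with basic tooling can run themselves is inspecting an image file's metadata, since camera photos usually carry EXIF tags while AI output and platform-recompressed screenshots often carry none. This is a minimal sketch, assuming the Pillow library is installed; metadata can be stripped or forged, so an empty result is only a weak signal, never proof of fakery.

```python
# Minimal sketch: inspect an image's EXIF metadata for provenance clues.
# Assumes Pillow is installed (pip install Pillow). Metadata can be
# stripped or forged, so treat the result as a weak signal, not proof.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_report(path: str) -> dict:
    """Return whatever EXIF tags the file carries. AI-generated images
    and platform-recompressed screenshots frequently carry none."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): str(value)
                for tag_id, value in exif.items()}

# Usage: an empty dict means the file has no camera metadata at all,
# which warrants extra caution before sharing.
# report = metadata_report("suspect_image.jpg")
```

Dedicated detectors and provenance standards (such as C2PA content credentials) go further than this, but as the experts above note, none of them are reliable on their own.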

The trend is worrying, and future challenges will be greater

Sardarizadeh urges the public to "train their eyes" to recognize AI content. But he also concedes, "Detecting AI-generated content is becoming extremely difficult, and the trend shows it will only get harder."

Under the dual pressures of advancing AI technology and lax platform controls, the battlefield of digital misinformation is expanding at unprecedented speed, with every mobile user standing on the front line of this information war.

This article, “AI Fake War Images Flood the Scene: How Iran Conflict Misinformation Takes Over Social Media,” first appeared on ABMedia.
