AI-Generated Fake Images & Videos Flood Online Amid Iran Conflict: How to Spot Them

by Anika Shah - Technology

AI-Generated Disinformation Escalates During US-Iran Conflict

As the conflict between the United States and Iran intensifies, a surge of false and misleading images and videos is circulating online, fueled by artificial intelligence (AI) and manipulated media. These deceptive materials, spread across social media platforms, make it significantly harder to discern truth from fiction and highlight the growing threat of AI-powered disinformation in modern warfare.

The Rise of AI-Fabricated Content

The current conflict has seen the rapid dissemination of fabricated content, including AI-generated images depicting attacks on infrastructure and misleading videos claiming to show military strikes. A prime example involved images circulating on Facebook showing an alleged drone attack on the Burj Khalifa in Dubai, which were quickly debunked as AI-created or altered by Full Fact, a British non-profit fact-checking organization [1].

Manipulated Media and Aged Footage Re-Emerging

Beyond entirely fabricated content, existing images and videos are being manipulated using AI to exaggerate events or present them out of context. Instances include a video falsely claiming to show a missile hitting the USS Abraham Lincoln, which the U.S. military refuted as a “lie” [2]. Old footage, such as a video of an Algerian soccer team’s celebration, has been falsely presented as recent footage of Iranian missiles over Israel [2].

AI Detection and Verification Tools

Several tools are available to help verify the authenticity of images and videos. Google Fact Check [3], Google reverse image search, and Google’s Gemini AI model can be used to identify manipulated or AI-generated content. Other verification tools include TinEye, InVID, Hive AI detector, AI or Not, and Reality Defender. Gemini, for example, identified a purported image of Iran’s Supreme Leader Ayatollah Ali Khamenei as “created or modified through Google AI technology” by detecting a digital watermark and visual inconsistencies [4].
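Beyond dedicated detection services, one quick local check is whether an image file even carries the EXIF metadata a genuine photo usually has. The sketch below is a minimal illustration using only the Python standard library: it scans a JPEG's bytes for an EXIF segment. Note that a missing segment proves nothing on its own, since social platforms routinely strip metadata on upload; this is just one clue among many.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG data contains an EXIF APP1 segment.

    Genuine camera photos usually carry EXIF metadata (camera model,
    timestamp, sometimes GPS coordinates). AI-generated images typically
    do not, though absence alone is not conclusive: social platforms
    often strip metadata on upload.
    """
    # EXIF data lives in an APP1 segment whose payload starts with
    # b"Exif\x00\x00"; checking the first 64 KB is enough in practice.
    return b"Exif\x00\x00" in jpeg_bytes[:65536]


# Minimal demonstration with hand-built byte strings (not real photos):
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00" + b"\x00" * 16
without_exif = b"\xff\xd8\xff\xdb" + b"\x00" * 16

print(has_exif_segment(with_exif))     # → True
print(has_exif_segment(without_exif))  # → False
```

In practice a verifier would pair this with the reverse image search and AI-detection tools named above rather than rely on metadata alone.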

Expert Advice for Identifying Disinformation

Experts recommend several steps individuals can take to identify fake content: checking the original source of the media, verifying whether the information has been reported by reputable news organizations, and utilizing reverse image search tools. It’s crucial to assess the credibility of the account that first uploaded the content and its posting history.
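The reverse image search step mentioned above typically relies on perceptual hashing: an image is reduced to a short fingerprint that stays nearly identical even after recompression or resizing, which is how recycled footage gets matched to its original source. The following toy sketch implements a difference hash (dHash) over a pre-scaled brightness grid; the input grid and frame names are illustrative, and real services scale the image down themselves.

```python
def dhash_bits(grid):
    """Difference hash over an 8x9 grid of brightness values (0-255).

    Real tools first scale the image down to this size; here the scaled
    grid is taken as input to keep the sketch stdlib-only. Each bit
    records whether a pixel is brighter than its right neighbour,
    yielding a 64-bit fingerprint.
    """
    return [1 if left > right else 0
            for row in grid
            for left, right in zip(row, row[1:])]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two nearly identical frames (e.g. the same re-uploaded footage,
# slightly brightened) produce fingerprints that match closely:
frame_a = [[(r * 29 + c * 31) % 256 for c in range(9)] for r in range(8)]
frame_b = [[min(255, v + 3) for v in row] for row in frame_a]

print(hamming_distance(dhash_bits(frame_a), dhash_bits(frame_b)))  # → 0
```

Because the hash depends on relative brightness rather than exact pixel values, the re-encoded copy still matches, which is why old footage presented as new can be traced back to its origin.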

The Motives Behind Disinformation

The proliferation of fake war-related images and videos is driven by both “information warfare” and commercial interests. Disinformation can be used to disrupt enemy operations and influence public opinion. Commercially, social media account owners may seek to increase traffic and generate revenue by exploiting the public’s interest in the conflict [4].

Challenges and Future Considerations

Verification becomes more complex when human intervention is added to AI-generated images. A lack of “AI literacy” among the general public leaves many people particularly vulnerable to accepting false information. Addressing this requires collaborative efforts from governments, civic groups, and media outlets to verify information and inform the public. The development and implementation of technical source verification methods, such as invisible watermarks, are also crucial steps in combating the spread of deepfakes and other forms of AI-generated disinformation.
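To make the idea of an invisible watermark concrete, the sketch below hides a bit pattern in the least significant bit of each pixel value. This is a deliberately simplified illustration of the concept only: production systems such as Google's SynthID use far more robust, learned signals that survive cropping and recompression, which this toy scheme does not.

```python
def embed_watermark(pixels, bits):
    """Hide watermark bits in the least significant bit of each pixel.

    A toy illustration of "invisible watermarking": changing the lowest
    bit shifts each brightness value by at most 1, which is invisible
    to the eye. Real systems (e.g. SynthID) work very differently.
    """
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def extract_watermark(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

mark = [1, 0, 1, 1, 0, 0, 1, 0]                      # 8-bit watermark
image = [200, 13, 97, 54, 180, 33, 76, 91, 120, 64]  # toy pixel values
marked = embed_watermark(image, mark)

print(extract_watermark(marked, 8))  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

The point of such schemes is that a detector holding the extraction key can flag AI-generated content even when the image looks entirely authentic to a human viewer.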
