A pioneering study of fake videos of the war has revealed their impact on the media and their implications for social media companies, news outlets and governments, according to its authors, who published their findings in the journal PLOS ONE. So-called 'deepfakes' are artificially manipulated audiovisual material. Most involve a fake "face" generated by artificial intelligence, which is merged with authentic video to create footage of an event that never actually took place.
Although fake, they can appear convincing and are often produced to impersonate or imitate a person. Researchers from University College Cork (UCC), in Ireland, examined tweets posted during the current Russian-Ukrainian war, in what constitutes the first analysis of the use of deepfakes in wartime disinformation and propaganda. In the study they analyzed about 5,000 tweets on X (formerly Twitter) from the first seven months of 2022 to explore how people react to deepfake content online, and to look for evidence of the previously theorized damage deepfakes do to trust.
As deepfake technology becomes increasingly accessible, it is important to understand how these threats arise on social media. The war in Ukraine was the first real example of the use of deepfakes in an armed conflict. The researchers highlight examples of deepfake videos from this war, including the use of video game footage presented as proof of the urban myth of the fighter pilot 'The Ghost of Kiev', a deepfake of Russian President Vladimir Putin that showed him announcing peace with Ukraine, and the hacking of a Ukrainian news website to display a fake surrender message from Ukrainian President Volodymyr Zelensky.
According to the study, fear of deepfakes often undermined users' trust in footage of the conflict, to the point that some lost confidence in any image coming out of it. The study is also the first of its kind to find evidence of online conspiracy theories incorporating deepfakes. The researchers discovered that a great deal of genuine media was being dismissed as 'deepfakes'.
The study showed that a lack of literacy about this phenomenon gave rise to significant misunderstandings of what constitutes a deepfake, demonstrating the need to promote literacy in these new forms of media. However, it also found that efforts to raise awareness of deepfakes can themselves undermine trust in legitimate videos.