Why deepfakes could endanger democracies and even trigger wars


Düsseldorf, Berlin. This call will probably go down in criminal history. It was an emergency, the boss told his employee: 220,000 euros had to be transferred to an account in Hungary, immediately. The man, employed at the English subsidiary of a German energy company, did as he was told. Why should he have had doubts? The voice sounded genuine – the speech melody, the German accent. The transfer went through.

The employee had not been speaking with his boss, but with a fraudster, who had used voice-imitation software to produce a barely detectable fake. A week and a half ago, the insurer Euler Hermes made the case public – and triggered headlines worldwide: criminals may have used artificial intelligence (AI) for the first time to impersonate someone else.

Although the insurer has presented no evidence, the case is being described as the first fraud involving a deepfake. The term alludes to the underlying technology, "deep learning" – a branch of artificial intelligence that takes the human brain as its model and uses multiple layers of artificial neural networks to process information. The result: images, videos and sound recordings that look real but are artificially generated.

220,000 euros – the damage that Euler Hermes had to cover – is nothing compared to the destructive potential deepfakes could develop. Experts warn that a new era of disinformation is imminent. Fake videos could provoke social crises or fuel panic on the financial markets. The mere possibility of forgery could further undermine confidence in democratic institutions, polarize debates and deepen social divisions. "For nations, the window of opportunity to protect themselves against the potential threats of deepfakes is closing before they cause a catastrophe," warns Charlotte Stanton of the Carnegie Endowment for International Peace.

The German federal government has recognized the danger, even if it strives for sober language. The dissemination of "false information generated by deepfakes to influence the public" "fundamentally cannot be ruled out," explains the Interior Ministry.

The authorities are therefore preparing. To "detect or combat crime in cyberspace using new technological methods," the federal security agencies are "constantly striving to develop their own analytical, investigative and law-enforcement capabilities." The machine-learning methods that can be used to create deepfakes "are also used specifically to support the detection of so-called deepfakes."

However, a reply from State Secretary Klaus Vitt to a written question from FDP parliamentarian Konstantin Kuhle raises doubts as to whether the federal government would really be able to identify deepfakes. "Approaches from science and research for the detection of so-called deepfakes are known to the federal security authorities, but these essentially amount to basic research," says the letter, which is available to the Handelsblatt.

The federal government does not want to give more details about its forensic instruments: the "welfare of the state" precludes the disclosure of "police and intelligence security procedures."

Porn videos as a playground for deepfakers

Deepfakes first attracted wide attention in 2017, when pornographic videos featuring celebrity actresses surfaced on Reddit's anarchic portal – users had employed software to graft the actresses' faces onto the performers' bodies. The fakes spread quickly; the company closed the forum a few months later, but the idea was out in the world.

The cost of video manipulation is dropping drastically. What once required specialists at Hollywood studios can now be put together on a powerful PC. Some of the necessary programs are available for free, and anyone who still needs practice can watch a tutorial on YouTube. Kaan Sahin of the German Council on Foreign Relations (DGAP) therefore speaks of a "democratization of disinformation."

This progress is a consequence of the AI revolution of recent years. Research into deep learning has been going on for decades, explains computer scientist Hao Shen, who heads the machine-learning laboratory at Fortiss, a research institute in the Free State of Bavaria: "But now the foundations are in place."

In addition to the algorithms, some of which researchers worked out on paper decades ago, there are now the gigantic amounts of data needed to train the systems and the massive computing power needed to do the work. Suddenly, applications once considered science fiction are becoming reality: from poker programs that beat professionals to cars that drive themselves.

The same applies to the editing of videos and audio files. "A deepfake is ultimately a special application of deep learning," says Shen. Here, too, algorithms learn autonomously by searching for rules and patterns in large amounts of data.

Take video as an example: the software first derives which features of a face are important – eyes and ears, laugh lines and frown lines. The computer does this independently on the basis of examples, without a person defining which components matter.
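The architecture commonly used for such face swaps can be sketched in a few lines. The following is a minimal, purely illustrative example – untrained random weights stand in for the learned layers, and all names and sizes are assumptions, not details from the case described here. The core idea is that one shared encoder compresses any face into a compact representation, while a separate decoder per person reconstructs a face; decoding person A's representation with person B's decoder transfers A's expression onto B's face.

```python
import numpy as np

# Illustrative sketch, not a working deepfake pipeline: one shared
# encoder, one decoder per person. All dimensions are arbitrary
# assumptions, and the weights are random rather than trained.

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64    # a flattened 64x64 grayscale "face"
LATENT_DIM = 128      # compact shared representation

def layer(n_in, n_out):
    """Random weight matrix standing in for a trained dense layer."""
    return rng.standard_normal((n_in, n_out)) * 0.01

W_enc = layer(FACE_DIM, LATENT_DIM)    # shared encoder weights
W_dec_a = layer(LATENT_DIM, FACE_DIM)  # decoder for person A
W_dec_b = layer(LATENT_DIM, FACE_DIM)  # decoder for person B

def encode(face):
    # Compress a face into the latent code (the "important features")
    return np.tanh(face @ W_enc)

def decode(latent, w_dec):
    # Reconstruct a face from the latent code with one person's decoder
    return np.tanh(latent @ w_dec)

face_a = rng.standard_normal(FACE_DIM)  # stand-in for a photo of A
latent = encode(face_a)
swapped = decode(latent, W_dec_b)       # A's expression on B's face

print(latent.shape, swapped.shape)
```

In a real system, encoder and both decoders are trained jointly on many photos of each person, so the latent code ends up capturing pose and expression rather than identity – which is exactly why swapping decoders works.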

It then becomes possible to transfer certain facial expressions onto another person. The results include a video of former US President Barack Obama scolding his successor Donald Trump; a video of Facebook founder Mark Zuckerberg babbling about total control; and clips of Nicolas Cage, whose face appears in blockbusters such as "Indiana Jones" and "James Bond 007".

For now, the difference between original and forgery is still visible: the actors in deepfake videos mostly look artificial, the mouth hangs strangely open, the head movements are stiff. Intuitively, many people sense that something is wrong. But that is only a snapshot. "In the near future, people will no longer be able to distinguish a real image from a fake one," Shen says. "Presumably we will lose the fight against the technology."

There are legitimate applications for such systems, as researchers regularly emphasize in their publications. One example is the media industry: films are becoming even more realistic, and studios and broadcasters can transfer the facial expressions of real actors onto digital avatars – or fix mistakes from the shoot without anyone noticing.

Other areas benefit from simulation. "We can use the technology to create realistic scenarios in which autonomous vehicles train virtually," explains Shen. The system learns to deal with unexpected situations without risking lives on the street. There are also potential applications in medicine: "We can simulate how rare cancers develop, thus improving diagnosis and treatment," the engineer explains.

But it will not stay that way. "It's a great technology," says Shen. The problem is that it invites abuse, "and governments are not doing anything about it." The US in particular has become sensitized to the danger: Russian interference in the 2016 presidential election campaign still preoccupies American politics. Opponents of US President Donald Trump fear that Russia could intervene in the election campaign again in 2020 – this time not with relatively primitive Facebook and Twitter posts, but with an attempt to use deepfakes to swing the election in Trump's favor.
