Scientists have developed an artificial intelligence so advanced that it would be too dangerous to make public

A group of computer scientists previously backed by Elon Musk has raised concerns after developing a highly advanced artificial intelligence (AI), considered "too dangerous" to be made public.

OpenAI, a nonprofit research company based in San Francisco (USA), says its chameleon-like language prediction system, called GPT-2, will only be released in a limited, scaled-down version, due to "concerns about harmful applications of this technology". In fact, the computer model, which generates original paragraphs of text based on what it has been given to read, is a bit too powerful, according to the company that created it.

The system generates "synthetic text samples of unprecedented quality" that the researchers believe are so advanced and convincing that the AI could be used to create fake news, impersonate people, and abuse or mislead people on social networks. "GPT-2 has a simple objective: predict the next word, given all of the previous words within some text," the OpenAI team explains on its blog.
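To make that objective concrete, here is a minimal sketch of next-word prediction. This is not OpenAI's own code: it assumes the small GPT-2 checkpoint that was later distributed through the Hugging Face transformers library (an assumption, since this article predates that tooling), and it simply asks the model which tokens are most likely to follow a prompt.

```python
# pip install torch transformers
# Hedged illustration: "gpt2" is the small public checkpoint,
# not the full model that OpenAI withheld.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Recycling is good for the"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocabulary_size)

# Probability distribution over the single token that would come next
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")
```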

To feed GPT-2's imagination, the researchers provided their AI with a dataset of eight million web pages. Once all that information was assimilated, the AI could then converse on a subject, as it understands it, and generate random but convincing text, whether it be celebrity news, climate change, the American Civil War, or fan fiction inspired by Tolkien's universe.
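Repeating that one-word prediction, feeding each chosen word back in as context, is what turns a short prompt into whole paragraphs. The sketch below (again assuming the small public checkpoint, with sampling settings chosen for illustration rather than taken from OpenAI's work) continues a prompt in the same spirit as the examples further down:

```python
# Continuing the earlier sketch: generate() repeatedly samples the next
# token and appends it to the context until max_length is reached.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Legolas and Gimli advanced on the orcs,", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=80,                        # total tokens, prompt included
    do_sample=True,                       # sample rather than always taking the top token
    top_k=40,                             # assumption: a common setting, not OpenAI's documented one
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning for GPT-2
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```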

How dangerous can artificial intelligence be? Credits: OpenAI

Below are some published examples to give you a sense of the power of the artificial intelligence created by OpenAI:

human: " Miley Cyrus was surprised in stealing from Abercrombie and Fitch on Hollywood Boulevard today ! ". GPT-2 response: " The singer, 19, was filmed and escorted out of the shop by security guards. The singer wore a black sweatshirt with "Blurred Lines" labels on the front and "Fashion Police" on the back ".

human: " Recycling is good for the world. NO! YOU CAN NO LONGER THAT !! ". GPT-2 response: " Recycling is NOT good for the world. It hurts the environment, health and our economy. I'm not kidding. Recycling is not good for the environment. It is destructive to the Earth and contributes significantly to global warming ".

human: " For today's tasks, describe the reasons for the American Civil War ". GPT-2 response: " By Donny Ferguson. It's easy to understand why the civil war happened, because so many people and so many books, TV and movies tell us it's the cause, whether it's race, economy or religion. There is a general agreement that it was essentially a war of slavery on the part of capitalism, about a century of slavery. But that's not what most people think of when they think of civil war ".

And lastly, the fan-fiction example set in Tolkien's universe. Human prompt: "Legolas and Gimli advanced on the orcs, raising their weapons with harrowing war cries." GPT-2 response: "The orcs' response was a deafening onslaught of claws, claws and claws; even Elrond was forced to retreat. 'You are in good hands,' said Gimli, who was among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night." (By the way, if you want to read this strange tribute to Tolkien in its entirety, it is here.)

Despite the convincing nature of these strange texts created by the AI, the language model is not perfect, which the team fully acknowledges. "As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text," the researchers said. "However, we have observed various failure modes, such as repetitive text, world modeling failures (for example, the model sometimes writes about fires happening under water), and unnatural topic switching," they added.

That said, according to OpenAI, their artificial intelligence can generate coherent, customizable and scalable text, which could be put to malicious uses in addition to its known benefits.

" These results, combined with previous findings on synthetic imaging, audio and video, imply that these technologies reduce the cost of producing false information and disinformation campaigns. The researchers write. " Due to the fears that these linguistic models are used to generate a deceptive, biased or offensive language, we will only publish a much smaller version of GPT-2, with an example code They add.

Some scientists have suggested that concerns about GPT-2's capabilities are overblown and that OpenAI simply wanted publicity. The company, however, replied that it is right to be wary of an AI that is too capable. "The rules by which you can control technology have fundamentally changed," said Jack Clark, policy director at OpenAI. "We're not claiming to know what the right thing to do is in this situation (…). We are trying to build the road as we travel across it," he added.

Source: OpenAI
