An AI that was deemed too dangerous to be released has now been released into the world.
Researchers had feared that the model, known as “GPT-2”, was so powerful that it could be maliciously misused by everyone from politicians to scammers.
GPT-2 was created for a simple purpose: it can be fed a piece of text, and is able to predict the words that will come next. By doing so, it is able to create long strings of writing that are largely indistinguishable from those written by a human being. But it became clear that it was worryingly good at that job, with its text creation so powerful that it could be used to scam people and might undermine trust in the things we read.
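As a loose analogy only (this is a toy bigram sketch, not OpenAI's model or code; the function names are invented for illustration), the "predict the next word, append it, repeat" loop at the heart of such systems can be shown in a few lines:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count which words follow which word in a tiny training text."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Autoregressively extend `seed` by sampling a plausible next word."""
    rng = rng or random.Random(0)
    out = seed.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no known continuation; stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model reads the text and the model predicts the next word"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

GPT-2 replaces the word-count table with a neural network holding hundreds of millions of parameters, which is what lets its continuations stay coherent over whole paragraphs rather than a few words.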
What's more, the model could be abused by extremist groups to create “synthetic propaganda”, allowing them to automatically generate long texts promoting white supremacy or jihadist Islamism, for instance.
“Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” OpenAI wrote in a February blog post, released when it made the announcement. “As an experiment in responsible disclosure, we are releasing a much narrower model for researchers to experiment with, as well as a technical paper.”
At the time, the organization released only a very limited version of the tool, which used 124 million parameters. It has since released progressively more complex versions, and has now made the full version available.
The full version is more convincing than the narrower one, but only “marginally”. The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.
OpenAI hopes the release will help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.
In February, the researchers said there were a variety of ways that malicious people could misuse the program. The generated text could be used to create misleading news articles, impersonate other people, or automatically produce abusive, fake or spam content for social media – along with a variety of possible uses that might not have been imagined yet, they noted.
Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.
“These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns,” they wrote. “The public at large will need to become more skeptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more skepticism about images.”
The researchers said that experts needed to consider “how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures”.
Source: The Independent – https://www.independent.co.uk/life-style/gadgets-and-tech/news/ai-artificial-intelligence-dangerous-text-gpt2-elon-musk-a9192121.html