Research firm OpenAI has released an update to its system that generates coherent paragraphs of text, including realistic stories and articles, despite having previously deemed the tool too dangerous to release to the public.
AI-generated fake news
Back in February, OpenAI stated that it would not release the fully trained model due to concerns about its potential for misuse. Instead, the research company released a much smaller model along with a technical paper for experimentation only.
As a result, the initial release was a far less capable model than the one used during the training phase. This month, however, OpenAI has shared a new version that is six times more powerful than the original.
In its most recent blog post, the company noted that synthetic text can convince humans that it is authentic. In fact, research partners Sarah Kreps and Miles McCain at Cornell found that 83% of people rated GPT-2 synthetic text samples almost as convincing as real articles from the New York Times.
Potential for malicious use
To demonstrate the model’s ability to produce text, machine learning engineer Adam King developed Talk to Transformer. By entering a custom prompt, anyone can experiment with the neural network.
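The article does not describe how such systems work internally, but the core idea behind generators like GPT-2 is next-token prediction: given the prompt and the text produced so far, the model repeatedly picks a likely next word. The toy bigram sketch below (an illustration only, far simpler than GPT-2’s transformer architecture, with a made-up miniature corpus) shows that prediction loop in miniature:

```python
import random
from collections import defaultdict

# Toy stand-in for a language model: GPT-2 predicts the next token
# from context; this bigram model does the same using only the
# immediately preceding word and a tiny hypothetical corpus.
corpus = ("the model generates text . the model predicts the next word . "
          "the next word follows the prompt .").split()

# Record which words follow which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(prompt_word, length=8, seed=0):
    """Extend a one-word prompt by sampling successor words."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:  # dead end: word never seen mid-corpus
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

A real system replaces the bigram lookup with a neural network conditioned on the entire preceding context, which is why its output stays coherent over whole paragraphs rather than a few words.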
While the predetermined prompts can produce impressive results, King’s version of the generator is far from perfect. Noel Sharkey, a professor of computer science at the University of Sheffield, had similar reservations.
“If the software worked as intended by Open AI, it would be a very useful tool for easily generating fake news and clickbait spam,” he told the BBC. “Fortunately, in its present form, it generates incoherent and ridiculous text with little relation to the input ‘headlines’.”
However, research from AI2/UW illustrates that news written by a system called Grover can be more plausible than human-written propaganda. And while today’s text generators are somewhat crude, successive versions of the OpenAI model will likely become more accurate.
With this in mind, it is vital that we develop robust verification techniques against powerful generative models. Otherwise, AI-generated fake news could become indistinguishable from human writing.