
THE BROAD VIEW

The AI that can write a fake news story from a handful of words

The potential for software to produce authentic-looking fake news articles comes amid global concerns over technology's role in the spread of disinformation

Given just the first two sentences, the OpenAI software was able to generate a convincing seven-paragraph news story, including quotes from government officials.

London

OPENAI, an artificial intelligence (AI) research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published on Thursday by OpenAI, the system was given some sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown."

From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

"The texts that they are able to generate from prompts are fairly stunning," said Sam Bowman, a computer scientist at New York University who specialises in natural language processing and who was not involved in the OpenAI project, but was briefed on it. "It's able to do things that are qualitatively much more sophisticated than anything we've seen before."

OpenAI is aware of the concerns around fake news, said Jack Clark, the organisation's policy director.

"One of the not-so-good purposes would be disinformation because it can produce things that sound coherent but which are not accurate," he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

The potential for software to near-instantly create fake news articles comes amid global concerns over technology's role in the spread of disinformation.

European regulators have threatened action if tech firms don't do more to prevent their products from helping sway voters, and Facebook has been working since the 2016 US election to try to contain disinformation on its platform.

Mr Clark and Mr Bowman both said that, for now, the system's abilities are not consistent enough to pose an immediate threat. "This is not a shovel-ready technology today, and that's a good thing," Mr Clark said.

Language modelling

Unveiled in a paper and a blog post on Thursday, OpenAI's creation is trained for a task known as language modelling, which involves predicting the next word of a piece of text based on knowledge of all previous words, similar to how auto-complete works when typing an e-mail on a mobile phone. It can also be used for translation and open-ended question answering.
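The idea of predicting the next word from the words that came before can be illustrated with a toy example. The sketch below is a minimal bigram model in Python, far simpler than OpenAI's neural network, using only counts of which word follows which; the corpus and function names are hypothetical, chosen for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each following word appears after it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny made-up corpus for demonstration.
corpus = (
    "the train was stolen today . "
    "the train was found later . "
    "the carriage was empty ."
)
model = train_bigram_model(corpus)
print(predict_next(model, "train"))  # "was" follows "train" in every occurrence
```

A modern language model replaces these raw counts with a neural network conditioned on the entire preceding text, which is what lets it continue a two-sentence prompt into a full, coherent article rather than merely echoing the single most common next word.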

One potential use is helping creative writers generate ideas or dialogue, said Jeff Wu, a researcher at OpenAI who worked on the project.

Others include checking for grammatical errors in texts, or hunting for bugs in software code. Further in the future, the system could be fine-tuned to summarise text for corporate or government decision-makers, he said.

Landmark results

In the past year, researchers have made a number of sudden leaps in language processing.

In November, Alphabet Inc's Google unveiled a similarly multi-talented algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a research lab in Seattle, achieved landmark results in natural language processing with an algorithm called Elmo.

Mr Bowman said BERT and Elmo were "the most impactful development" in the field in the past five years. By contrast, he said OpenAI's new algorithm was "significant" but not as revolutionary as BERT.

Mr Musk co-founded OpenAI but stepped down from its board last year. He had helped kickstart the non-profit research organisation in 2016 along with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind startup incubator Y Combinator.

Other early backers of OpenAI include Peter Thiel and Reid Hoffman. BLOOMBERG

Here's the fake story:

The system was only given the first two sentences, which were written by a human:

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

And this is the rest of the news article that the AI wrote, with no human guidance:

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an e-mail to Ohio news outlets, the US Department of Energy said it is working with the Federal Railroad Administration to find the thief.

"The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation," said Tom Hicks, the US Energy Secretary, in a statement. "Our top priority is to secure the theft and ensure it doesn't happen again."

The stolen material was taken from the University of Cincinnati's Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the US Department of Energy's Office of Nuclear Material Safety and Security is leading that team's investigation.

"The safety of people, the environment and the nation's nuclear stockpile is our highest priority," Hicks said. "We will get to the bottom of this and make no excuses."