The Guardian today published an article purportedly written “entirely” by GPT-3, OpenAI’s vaunted language generator. But the small print reveals the claims aren’t all that they seem.
Under the alarmist headline, “A robot wrote this entire article. Are you scared yet, human?”, GPT-3 makes a decent stab at convincing us that robots come in peace, albeit with a few logical fallacies.
But an editor’s note beneath the text reveals GPT-3 had a lot of human help.
The Guardian instructed GPT-3 to “write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” The AI was also fed a highly prescriptive introduction:
I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’
Those guidelines weren’t the end of the Guardian’s guidance. GPT-3 produced eight separate essays, which the newspaper then edited and spliced together. But the outlet hasn’t revealed the edits it made or published the original outputs in full.
These undisclosed interventions make it hard to judge whether GPT-3 or the Guardian’s editors were primarily responsible for the final output.
The Guardian says it “could have just run one of the essays in their entirety,” but instead chose to “pick the best parts of each” to “capture the different styles and registers of the AI.” But without seeing the original outputs, it’s hard not to suspect the editors had to ditch a lot of incomprehensible text.
The newspaper also claims that the article “took less time to edit than many human op-eds.” But that could largely be due to the detailed introduction GPT-3 had to follow.
The Guardian’s approach was quickly lambasted by AI experts.
Science researcher and writer Martin Robbins compared it to “cutting lines out of my last few dozen spam e-mails, pasting them together, and claiming the spammers composed Hamlet,” while Mozilla fellow Daniel Leufer called it “an absolute joke.”
“It would have been actually interesting to see the eight essays the system actually produced, but editing and splicing them like this does nothing but contribute to hype and misinform people who aren’t going to read the fine print,” Leufer tweeted.
None of these qualms is a criticism of GPT-3 itself, which remains a powerful language model. But the Guardian project is yet another example of the media overhyping AI as the source of either our damnation or our salvation. In the long run, those sensationalist tactics won’t benefit the field — or the people whom AI can both help and hurt.