This article was published on August 21, 2019

[Best of 2019] This AI-powered text generator is the scariest thing I’ve ever seen — and you can try it

OpenAI, a nonprofit focused on creating human-level artificial intelligence, just released an update to its GPT-2 text generator. I’m not being hyperbolic when I say that, after trying it, I’m legitimately terrified for the future of humanity if we don’t figure out a way to detect AI-generated content – and soon.

GPT-2 isn’t a killer robot and my fears aren’t that AI is going to rise up against us. I’m terrified of GPT-2 because it represents the kind of technology that evil humans are going to use to manipulate the population — and in my opinion that makes it more dangerous than any gun. Here’s how it works: you give it a prompt and it near-instantly spits out a bunch of words. What’s scary about it is that it works. It works incredibly well. Here are a few examples from Twitter:

And, lest you think I’m using cherry-picked examples to illustrate a point, here are some from prompts I entered myself (the words in bold are mine, the rest is all AI):

Some of these examples are Turing Test-ready, and others feel like they’re about one more GPT-2 update away from being indistinguishable from human-created content. What’s important to understand here is that OpenAI didn’t invent some sort of supercomputer, or reinvent AI as we know it; it just made a really powerful model using state-of-the-art artificial intelligence technologies. I say “just” because this isn’t a one-off feat: organizations that didn’t just ink a one-billion-dollar deal with Microsoft will be able to pull it off too.

AI engineer Adam King (@AdamKing on Twitter) has already taken the trouble of putting GPT-2, with the new-and-improved 774M model, online. You can see for yourself how easy it is to generate cohesive text on demand using AI.
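If you’d rather poke at the model locally than through the web demo, here’s a minimal sketch of the prompt-in, text-out loop using the Hugging Face transformers library, where the gpt2-large checkpoint is the 774M-parameter model; the prompt and sampling settings are my own illustrative choices, not anything King or OpenAI specifies:

```python
from transformers import pipeline

# Minimal sketch: load the 774M GPT-2 checkpoint ("gpt2-large" on the
# Hugging Face hub) and sample a few continuations from a prompt.
# The prompt and sampling settings below are illustrative assumptions.
generator = pipeline("text-generation", model="gpt2-large")

prompt = "OpenAI just released an update to its GPT-2 text generator."
samples = generator(
    prompt,
    max_length=100,          # prompt plus continuation, in tokens
    num_return_sequences=3,  # a few candidates per "click"
    do_sample=True,          # sample instead of greedy decoding
    top_k=40,                # truncate to the 40 most likely next tokens
)

for sample in samples:
    print(sample["generated_text"])
    print("---")
```

Each call plays the role of one press of the “generate” button: same prompt, a different sampled continuation every time.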

Don’t get me wrong: the majority of the time you click “generate,” it spits out a bunch of garbage. I’m not sitting here with a shocked look on my face, contemplating all the ways this technology could be used against us, because I’m overestimating the threat of a web interface for an AI that’s borderline prestidigitation. I’m a cynic who’s changing his opinion after seeing legitimate evidence that the human writing process can be emulated by an artificial intelligence at the push of a button.

Keep clicking “generate” and you’ll be surprised how few clicks it usually takes to reach some genuinely convincing text.

OpenAI has serious concerns when it comes to releasing these models into the wild. Six months ago it stirred up a bunch of controversy when it decided to launch GPT-2 with a staged release. A number of researchers in the AI community objected to OpenAI’s withholding of the full model – in essence accusing the organization of betraying its founding mission as a nonprofit meant to release its work as open source.

Hell, I wrote an entire article mocking the breathless media coverage of OpenAI’s decision not to release the whole model, under the headline “Who’s afraid of OpenAI’s big, bad text generator?” But this release is different. This one works almost well enough to use as a general artificial intelligence for text generation – almost. And, chances are, the 774M model won’t be the last. What’s this thing going to be capable of at double that size, or triple?

I’ll just put this right here for context (from OpenAI’s blog post announcing the release of the new-and-improved GPT-2 model):

Detection isn’t simple. In practice, we expect detectors to need to detect a significant fraction of generations with very few false positives. Malicious actors may use a variety of sampling techniques (including rejection sampling) or fine-tune models to evade detection methods. A deployed system likely needs to be highly accurate (99.9%–99.99%) on a variety of generations. Our research suggests that current ML-based methods only achieve low to mid–90s accuracy, and that fine-tuning the language models decreases accuracy further. There are promising paths forward (see especially those advocated by the developers of “GROVER”) but it’s a genuinely difficult research problem. We believe that statistical detection of text needs to be supplemented with human judgment and metadata related to the text in order to effectively combat misuse of language models.
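To make that concrete, here’s a crude sketch of what a statistical detector even looks like – not OpenAI’s detector or GROVER, just an illustrative heuristic under my own assumptions: sampled text tends to look unusually predictable to a language model, so you can score a passage’s perplexity under GPT-2 itself and flag suspiciously low scores.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Illustrative heuristic, NOT OpenAI's detection method: score how
# predictable a passage is to GPT-2. Machine-sampled text tends to
# have lower perplexity than human writing.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        # labels=ids makes the model return the mean cross-entropy
        # of the sequence under its own predictions
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower score = more "machine-like". The hard part is the threshold:
# catching most generations without false-flagging humans is exactly
# the 99.9%+ accuracy problem OpenAI describes, and adversaries can
# fine-tune or rejection-sample their way around any fixed cutoff.
print(perplexity("Some passage you suspect was machine-generated."))
```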

It won’t be long before AI-generated media – including audio, video, text, and combinations of all three – are entirely indistinguishable from media created by humans. If we can’t find a way to distinguish between the two, tools like GPT-2 – in combination with the malicious intent of bad actors – will simply become weapons of oppression.

OpenAI was right to delay the release of GPT-2 six months ago, and it’s right to release the new model now. We won’t be able to figure out how to break it unless we let the AI community at large take a crack at it. My hat’s all the way off to policy director Jack Clark and the rest of the team at OpenAI. The entire human race needs to proceed with caution when it comes to AI research going forward.
