Tristan Greene, Editor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
GPT-3, the highly-touted text generator built by OpenAI, can do a lot of things. For example, Microsoft today announced a new AI-powered “autocomplete” system for coding that uses GPT-3 to build out code solutions for people without requiring them to do any developing.
But one thing the technology cannot do is “dupe humans” with its ability to write misinformation.
Yet, you wouldn’t know that if you were solely judging by the headlines in your news feed.
Wired recently ran an article with the title “GPT-3 can write disinformation now – and dupe human readers,” and it was picked up by other outlets, which then echoed the coverage.
While we’re certainly not challenging Wired’s reporting here, it’s clear that this is a case of potential versus reality: GPT-3 is absolutely not capable of “duping humans” on its own. At least not today.
Here’s the portion of the Wired article we most agree with here at Neural:
In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing China sanctions, for instance, the percentage of respondents who said they were against such a policy doubled.
A lot of the rest gets lost in the hyperbole.
The big deal
Researchers at Georgetown spent half a year using GPT-3 to spit out misinformation. The researchers had it generate full articles, simple paragraphs, and bite-sized texts meant to represent social media posts such as tweets.
The TL;DR of the situation is this: The researchers found the articles were pretty much useless for the purposes of tricking people into believing misinformation, so they focused on the tweet-sized texts. This is because GPT-3 is a gibberish-generator that manages to ape human writing through sheer brute force.
Volumes have been written about how awesome and powerful GPT-3 is, but at the end of the day it’s still about as effective as asking a library a question (not a librarian, but the building itself!) and then randomly flipping through all the books that match the subject with your eyes closed and pointing at a sentence.
That sentence might be poignant and it might make no sense at all. In the real world, when it comes to GPT-3, this means you might give it a prompt such as “who was the first president of the US?” and it might come back with “George Washington was the first US president, he served from April 30, 1789 – March 4, 1797.”
That would be impressive, right? But it’s just as likely (perhaps even more so) to spit out gibberish. It might say “George Washington was a good pants to yellow elephant.” And, just as likely, it might spit out something racist or disgusting. It was trained on the internet, with a large portion of that being Reddit, after all.
The point is simple: AI, even GPT-3, doesn’t know what it’s saying.
Why it matters
AI cannot generate quality misinformation on command. You can’t necessarily prompt GPT-3 with “Yo, computer, give me some lies about Hillary Clinton that will drive left wingers nuts” or “Explain why Donald Trump is a space alien who eats puppies,” and expect any form of reasonable discourse.
Where it does work, in short-form, tweet-sized snippets, its output must be heavily curated by humans.
In the above example from the Wired article, the researchers claim humans were more likely to agree with misinformation after they read GPT-3’s generated text.
But, really? Were those same people more likely to believe nonsense because of, in spite of, or without knowing that GPT-3 had written the misinformation?
Because it’s much less expensive, much less time-consuming, and far easier for a basic human to come up with BS that makes sense than it is for the world’s most powerful text generator.
Ultimately, the Wired piece points out that bad actors would need a lot more than just GPT-3 to come up with a viable disinformation campaign. Getting GPT-3 to actually generate text such as “Climate change is the new global warming” is a hit-or-miss prospect.
That makes it useless for the troll farms invested in mass misinformation. They already know the best talking points to radicalize people and they focus on generating them from as many accounts as possible.
There’s no immediate use for bad actors invested in “duping people” to use these types of systems because they’re dumber than the average human. It’s easy to imagine some lowly employee at a troll farm smashing “generate” over and over until the AI spits out a good lie, but that simply doesn’t match the reality of how these campaigns work.
There are far simpler ways to come up with misinformation text. A bad actor can use a few basic crawling algorithms to surface the most popular statements on a radical political forum, for example.
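To illustrate why that route is cheaper than wrangling a text generator: once posts have been collected from a forum, ranking the most-repeated statements is a few lines of standard Python. This is a minimal, hypothetical sketch (the function name and the sample posts are invented for illustration); it assumes the crawling step has already produced a list of post strings.

```python
# Hypothetical sketch of the "surface the most popular statements" step
# described above. Assumes posts have already been crawled into a list
# of strings; the sample data below is invented for illustration.
from collections import Counter


def top_statements(posts, n=3):
    """Return the n most frequently repeated statements (case-insensitive)."""
    normalized = (p.strip().lower() for p in posts)
    return Counter(normalized).most_common(n)


posts = [
    "Statement A", "statement a", "Statement B",
    "Statement A", "Statement C", "Statement B",
]
print(top_statements(posts, n=2))
# [('statement a', 3), ('statement b', 2)]
```

The point isn’t that this is sophisticated; it’s that it isn’t. Frequency counting over already-human-written text reliably yields coherent talking points, which is exactly what a hit-or-miss generator can’t guarantee.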
At the end of the day, the research itself is incredibly important. As the Wired piece points out, there will come a time when these systems may be robust enough to replace human writers in some domains, and it’s important to identify how powerful they currently are so we can see where things are going.
But right now this is all academic
GPT-3 may one day influence people, but it’s certainly not “duping” most people right now. There will always be humans willing to believe anything they hear if it suits them, but convincing someone on the fence typically takes more than a tweet that can’t be attributed to an intelligent source.
Final thoughts: The research is strong, but the coverage highly exaggerates what these systems can actually do.
We should definitely be worried about AI-generated misinformation. But, based on this particular research, there’s little reason to believe GPT-3 or similar systems currently present the kind of misinformation threat that could directly result in human hearts and minds being turned against facts.
AI has a long way to go before it’s as good at being bad as even the most modest human villain.