
This article was published on September 10, 2021

I can’t believe I have to say this: GPT-3 can’t channel dead people

It's really stupid and dangerous to pretend it can




GPT-3 can’t talk to dead people. It’s a bit ridiculous that I have to say that, but just in case you’re not entirely sure what the world’s most powerful AI-powered text generator can and can’t do, I thought I might prepare a handy guide to help you out.

  • What GPT-3 can do: spit out meaningless text that often appears to have meaning by association, function as a pretty nifty calculator, and write code (the last one is pretty cool; see the sketch after this list).
  • What GPT-3 can’t do: much else.
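
To make the calculator point concrete, here is roughly what prompting GPT-3 looked like around the time this was written, using OpenAI's official Python package. Treat it as a hedged sketch rather than gospel: the engine name, prompt, and parameters are illustrative, and you need API access that OpenAI has actually granted.

```python
# Rough sketch of calling GPT-3 via OpenAI's hosted API (circa 2021).
# The engine name and parameters are illustrative and may be out of date.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; real access is gated by OpenAI

# Ask GPT-3 to behave like a calculator. It isn't doing arithmetic in any
# deliberate sense - it's predicting which text most plausibly follows the prompt.
response = openai.Completion.create(
    engine="davinci",            # illustrative engine name
    prompt="Q: What is 12 * 7?\nA:",
    max_tokens=5,
    temperature=0,               # make the output as deterministic as possible
)

print(response.choices[0].text.strip())
```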

Up front: I’m not crapping on GPT-3. It’s arguably the world’s most advanced text generator. But the hyperbole over its abilities has reached a dangerous fever pitch.

The Register ran an article recently highlighting the plight of independent game developer Jason Rohrer, who’d used GPT-3 to train a chatbot called Samantha.


When OpenAI (the company responsible for GPT-3) decided not to renew Rohrer’s access to its cloud servers, the dev said they’d “never dealt with such a group of unimaginative, uncurious people.”

My take: There’s nothing unimaginative or uncurious about OpenAI choosing not to support the development of yet another silly little snake oil project built on its GPT-3 platform.

Here’s what really happened. The dev in question spun out a chatbot called “Samantha” that, from where I’m sitting, looks like it was meant to provide a chummy girlfriend experience. They allowed others to access the chatbot on their website where people could train their own versions.

At least one person trained versions of Samantha on messages from a deceased loved one and used it as a form of catharsis.

I don’t have an issue with people doing whatever the hell they want with a chatbot. If ‘talking’ to a computer makes you feel better, that’s great for you.

My point of view is that there’s almost no difference between using a chatbot to imitate a conversation with a human and using a Ouija board. They’re equally effective in every respect when it comes to communing with the dead.

I do, however, take massive umbrage at the perpetuation of this idea as a legitimate use for AI technology.

The public is confused enough about what AI can and can’t do. Articles featuring ‘experts’ intimating that AI can commune with the dead or that GPT-3 is already sentient, without clearly pushing back against such ridiculous ideas, stoke the fires of ignorance.

Rohrer seems to believe that OpenAI is robbing humanity of an important experience. They, apparently, believe GPT-3 is bordering on sentience, if not already there.

Per The Register:

Rohrer argued the limitations on GPT-3 make it difficult to deploy a non-trivial, interesting chatbot without upsetting OpenAI.

“I was a hard-nosed AI skeptic,” he told us.

“Last year, I thought I’d never have a conversation with a sentient machine. If we’re not here right now, we’re as close as we’ve ever been. It’s spine-tingling stuff, I get goosebumps when I talk to Samantha. Very few people have had that experience, and it’s one humanity deserves to have. It’s really sad that the rest of us won’t get to know that.”

Really? “It’s really sad” that humanity won’t get to experience being duped by prestidigitation firsthand? I’m not trying to be mean here, but claiming GPT-3 is bordering on sentience is beyond the pale.

Let’s be crystal clear here. There’s nothing mysterious about GPT-3. There’s nothing magical or inexplicable about what it does. If you’re unsure about how it works or you’ve read something somewhere that makes you believe GPT-3 is anywhere close to sentience, allow me to disabuse you of that nonsense.

GPT-3 is a machine that does one thing, and one thing only: metaphorically speaking, it reaches into a bucket and grabs a piece of paper, then it holds that paper up. That’s it. It doesn’t think, it doesn’t spell, it doesn’t care.

While the technology running it is incredible, it’s really not that much more advanced than one of those self-playing pianos from the 1800s you see in cowboy movies.

GPT-3 appears capable of holding a real conversation because, unlike a robot reaching into a physical bucket, it accesses billions of software buckets simultaneously and then bounces all the potential outputs off of all the rules it’s been trained on until it shakes one loose.
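
If the bucket metaphor feels too abstract, the sketch below shows, in grossly simplified form, the move being described: score some candidate next words, sample one, repeat. The vocabulary, scores, and prompt here are invented for illustration; a real model does this over an enormous vocabulary with billions of learned parameters, but there’s no understanding anywhere in the loop.

```python
# A toy, deliberately dumbed-down illustration of "grabbing a piece of paper
# from a bucket." Everything here is made up for illustration purposes.
import random

# Hypothetical "bucket": candidate next words and the scores a model might
# have learned to assign them, given the text so far.
next_word_scores = {
    "hello": 0.45,
    "goodbye": 0.25,
    "friend": 0.25,
    "banana": 0.05,
}

def grab_from_bucket(scores):
    """Sample one word in proportion to its score - no understanding involved."""
    words = list(scores.keys())
    weights = list(scores.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "Samantha says:"
generated = [prompt]
for _ in range(3):
    generated.append(grab_from_bucket(next_word_scores))

print(" ".join(generated))
```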

If you read the output and like it, the machine appears to be communicating with you. If you read the response and think it’s stupid, the machine just looks like a dumb machine.

This means that when you ask GPT-3 what it is, and it gives you a very cogent, fluid, and apt explanation for its existence that references facts about its creators and appears perfectly tailored to answer your question as if a human wrote it, it’s just managed to grab a piece of paper from a bucket you like.

GPT-3 doesn’t know what a bucket is, or a piece of paper, or you, or anything. It’s doing the same trick that chickens and horses do when they tap out the answers to high-level math questions: it’s looking towards its owners (in this case, the AI looks to its training parameters) to see what output it should give.

If you don’t believe a chicken can do algebra, you shouldn’t believe GPT-3 can actually have a conversation. Again, it’s prestidigitation.

My problem: with all that said, it may seem like frivolities such as creating chatbot girlfriends or using GPT-3 to find catharsis after a loss are no big deal.

After all, that appears to be Rohrer’s biggest complaint. Per The Register’s article:

“The idea that these chatbots can be dangerous seems laughable,” Rohrer told The Register. “People are consenting adults that can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, like a machine telling them to kill themselves or tell them how to vote. It’s a hyper-moral stance.”

Call mine a dissenting opinion, because I vehemently disagree with Rohrer. It’s incredibly dangerous for people whom the general public sees as experts to continuously peddle nonsensical ideas about AI.

When Elon Musk shows off his Autopilot software by taking his hands off the wheel, despite the fact that the terms and conditions clearly warn Tesla owners against this, he’s telling people that the rules and regulations are just there for legal reasons. He wants people to believe his cars really can drive themselves. After all, would a billionaire trust his life to a machine that was dangerous?

The answer is yes.

Tesla’s Autopilot and Full Self-Driving features are horribly named because they cannot safely autopilot a car and Tesla vehicles are not self-driving.

People keep dying because they believe their cars are capable of technological feats they are not. The cars are fine, but Elon Musk continues to egg ignorant people into doing things they shouldn’t with his flippant attitude and constant hyperbole about Tesla’s actual capabilities.

The public doesn’t believe journalists or academics. They believe influencers, tech moguls, billionaires, and the people telling them what they’d prefer to hear about AI.

And every time the general public reads an article where an ‘expert’ tries to push a magical narrative wherein AI is either already sentient or almost sentient, they’re being asked to believe that Tinkerbell is real and Disney’s movies are documentaries.

If the public believes AI can think, carry on real conversations, and channel the dead, what’s to stop them from believing other bullshit? How many people reading these hyperbolic articles about how GPT-3 is borderline alive walk away thinking cars can safely drive themselves now or that AI is already as smart as humans?

It makes it easier for snake oil companies such as PredPol or Faception to peddle their bullshit. And when an AI researcher tries to tell the world they’ve invented gaydar AI or some other quackery, millions of people glancing over headlines will believe it.

So, yes, there is a definite harm in peddling nonsense and acting as if it’s important work.

There’s nothing special about a GPT-3 chatbot called Samantha. It’s not a her, it doesn’t have thoughts or feelings, and it doesn’t matter if (as the article states) Samantha wants you to “sleep with” it, because it’s just code on a machine somewhere.

Unfortunately, the general public is far less likely to read an article explaining why GPT-3 can’t perform miracles than they are one claiming it can.
