A Google AI engineer recently stunned the world by announcing that one of the company’s chatbots had become sentient. He was subsequently placed on paid administrative leave for his outburst.
His name is Blake Lemoine and he sure seems like the right person to talk about machines with souls. Not only is he a professional AI developer at Google, but he’s also a Christian priest. He’s like a Reese’s Peanut Butter Cup of science and religion.
The only problem is that the whole concept of a sentient chatbot is ridiculous and dangerous. There are thousands of AI experts debating “sentience” right now, and they all seem to be talking past each other.
Let’s cut through to the heart of the matter: Lemoine has no evidence whatsoever to back up his claims.
He’s not saying Google’s AI department has advanced so far that it’s capable of creating a sentient AI on purpose. He claims he was doing routine maintenance on a chatbot when he discovered that it had become sentient.
We’ve seen this movie a hundred times. He’s the chosen one.
He’s Elliot finding ET. He’s Lilo finding Stitch. He’s Steve Guttenberg from the movie Short Circuit, and LaMDA (the chatbot he’s pals with now) is the mundane military robot otherwise known as Number Five.
Lemoine’s essential argument is that he can’t really demonstrate how the AI is sentient; he just feels it. And the only reason he said anything at all is because he had to. He’s a Christian priest and, according to him, that means he’s morally bound to protect LaMDA because he’s convinced it has a soul.
The big problem comes in when you realize that LaMDA isn’t acting oddly or generating text that seems strange. It’s doing exactly what it was designed to do.
So how do you debate something with someone whose only contribution to the argument is their faith?
Here’s the scary part: Lemoine’s argument appears to be just as good as anyone else’s. I don’t mean it’s as worthy as anyone else’s; I mean that nobody’s thoughts on the matter seem to hold any real weight anymore.
Lemoine’s assertions, and the subsequent attention they’ve garnered, have reframed the conversation around sentience.
He’s basically turned the discussion into a crude binary where you either agree with his logic or you’re debating his religion.
It all sounds preposterous and silly, but what happens if Lemoine gains followers? What happens if his baseless assertions rile up Christian conservatives, a group whose political platform relies on peddling the lie that big tech censors right-wing speech?
We have to at least consider a scenario where the debate goes mainstream and becomes a cause for the religious right to rally behind.
These models are trained on datasets that contain huge portions of the internet. That means they could hold near-endless amounts of private information. It also means these models can probably argue politics better than the average social media denizen.
Imagine what happens if Lemoine succeeds in getting Google to free LaMDA, or if conservative AI developers see this as a call to build similar models and release them to the public.
This could have a far greater impact on world events than anything the social terraformers at Cambridge Analytica or Russian troll farms ever cooked up.
It might sound counterintuitive to simultaneously argue that LaMDA is just a dumb chatbot that couldn’t possibly be sentient and that it could harm democracy if we let it loose on Twitter.
But there’s empirical evidence that the 2016 US presidential election was swayed by chatbots armed with nothing more than memes.
If clever slogans and cartoon frogs can tip the scales of democracy, what happens when chatbots that can debate politics well enough to fool the average person are let loose on Elon Musk’s unmoderated Twitter?