
A Google developer recently decided that one of the company's chatbots, a large language model (LLM) called LaMDA, had become sentient.
According to a report in the Washington Post, the developer identifies as a Christian, and he believes the machine has something akin to a soul: that it has become sentient.
As is always the case, the "is it alive?" nonsense has lit up the news cycle. It's a juicy story whether you're imagining what it might be like if the developer were right or dunking on them for being so silly.
We don't want to dunk on anyone here at Neural, but it's flat-out dangerous to put these kinds of ideas in people's heads.
The more we, as a society, pretend that we're "thiiiis close" to creating sentient machines, the easier it'll be for bad actors, big tech, and snake oil startups to manipulate us with false claims about machine learning systems.
The burden of proof should be on the people making the claims. But what should that proof look like? If a chatbot says "I'm sentient," who gets to decide whether it really is?
google engineer: are you sure you're sentient?
AI: yes i am sure
google engineer [turning to the rest of the team]: case closed folks
- the hype (@TheHyyyype) June 12, 2022
I say it's simple: we don't have to trust any single person or group to define sentience for us. We can use some extremely basic critical thinking to sort it out for ourselves.
We can define a sentient being as an entity that is aware of its own existence and is affected by that knowledge: something that has feelings.
That means a sentient AI "agent" must be capable of demonstrating three things: agency, perspective, and motivation.
Agency
For humans to be considered sentient, sapient, and self-aware, we must possess agency. If you can imagine someone in a persistent vegetative state, you can visualize a human without agency.
Human agency combines two specific factors which developers and AI enthusiasts should endeavor to understand: the ability to act and the ability to demonstrate causal reasoning.
Current AI systems lack agency. AI cannot act unless prompted, and it cannot explain its actions because they're the result of predefined algorithms being executed by an external force.
The AI expert from Google who has evidently come to believe that LaMDA is sentient has almost certainly confused embodiment for agency.
Embodiment, in this context, refers to the ability of an agent to inhabit a subject other than itself. If I record my voice to a playback device, and then hide that device inside of a stuffed animal and press play, I've embodied the stuffy. I have not made it sentient.
If we give the stuffy its own unique voice and we make the tape recorder even harder to find, it still isn't sentient. We've just made the illusion better. No matter how confused an observer might become, the stuffed animal isn't really acting on its own.
Getting LaMDA to respond to a prompt demonstrates something that appears to be action, but AI systems are no more capable of deciding what text they will output than a Teddy Ruxpin toy is able to decide which cassette tapes to play.
If you give LaMDA a database made up of social media posts, Reddit, and Wikipedia, it's going to output the kind of text one might find in those places.
And if you train LaMDA exclusively on My Little Pony wikis and scripts, it's going to output the kind of text one might find in those places.
AI systems can't act with agency; all they can do is imitate it. Another way of putting this is: you get out what you put in, nothing more.
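To make the "you get out what you put in" point concrete, here's a minimal sketch. It is not how LaMDA works internally (a real LLM uses billions of learned parameters rather than simple word counts), and the two tiny corpora below are invented for illustration, but even this toy next-word model shows the same property: its output can only ever be a remix of whatever text it was trained on.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record which word followed which in the training text."""
    words = corpus.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(model: dict, start: str, length: int = 8) -> str:
    """Emit words by sampling from what followed each word during training."""
    word, output = start, [start]
    for _ in range(length):
        options = model.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

# Two made-up training sets standing in for "Reddit and Wikipedia"
# versus "My Little Pony wikis and scripts".
forum_text = "the economy is rigged and the mods are asleep and the economy is doomed"
pony_text = "friendship is magic and friendship is sparkly and ponies are magic"

print(generate(train_bigram_model(forum_text), "the"))        # forum-flavored babble
print(generate(train_bigram_model(pony_text), "friendship"))  # pony-flavored babble
```

Neither model "knows" anything; each one simply reflects the statistics of its training text back at you.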
Perspective
This one's a bit easier to understand. You can only ever view reality from your unique perspective. We can practice empathy, but you can't truly know what it feels like to be me, and vice versa.
That's why perspective is necessary for agency; it's part of how we define our "self."
LaMDA, GPT-3, and every other AI in the world lack any sort of perspective. Because they have no agency, there is no single "it" that you can point to and say, for example: that's where LaMDA lives.
If you put LaMDA inside a robot, it would still be a chatbot. It has no perspective, no means by which to think "now I am a robot." It cannot act as a robot for the exact same reason a scientific calculator can't write poetry: it's a narrow computer system that was programmed to do something specific.
If we want LaMDA to function as a robot, we'd have to combine it with more narrow AI systems.
Doing so would be just like taping two Teddy Ruxpins together. They wouldn't combine to become one Mega Teddy Ruxpin whose twin cassette players merged into a single voice. You'd still just have two specific, distinct models running near each other.
And, if you tape a trillion or so Teddy Ruxpins together and fill them each with a different cassette tape, then create an algorithm capable of searching through all the audio files in a relatively short period of time and associating the data contained in each file with a specific query to generate bespoke outputs… you will have created an analog version of GPT-3 or LaMDA.
Whether we're talking about toys or LLMs, when we imagine them being sentient we're still talking about stitching together a bunch of mundane stuff and acting like the magic spark of providence has brought it to life, like the Blue Fairy turning wood, paint, and cloth into a real boy named Pinocchio.
In your transcript, Lambda response to one question "Spending time with friends and family in happy and uplifting company" but you haven't asked it, "who is your family." If you still have access, I'd be interesting in hearing the answer to this.
- Richard Alleman (@allemanr) June 14, 2022
The developer who got fooled so easily should have seen the chatbot's assertion that it "enjoyed spending time with friends and family" as their first clue that the machine wasn't sentient. The machine isn't displaying its perspective; it's just outputting nonsense for us to interpret.
Critical thinking should tell us as much: how can an AI have friends and family?
AIs aren't computers. They don't have networking cards, RAM, processors, or cooling fans. They're not physical entities. They can't just "decide" to check out what's on the internet or search other nodes connected to the same cloud. They can't look around and discover they're all alone in a lab or on a hard drive somewhere.
Do you think numbers have feelings? Does the number five have an opinion on the letter D? Would that change if we smashed trillions of numbers and letters together?
AI doesn't have agency. It can be reduced to numbers and symbols. It isn't a robot or a computer any more than a bus or airplane full of passengers is a person.
Motivation
The final piece of the sentience puzzle is motivation.
We have an innate sense of presence that allows us to predict causal outcomes incredibly well. This creates our worldview and lets us situate our existence in relation to everything that appears external to the position of agency from which our perspective manifests.
However, what's interesting about humans is that our motivations can manipulate our perceptions. For this reason, we can explain our actions even when they aren't rational. And we can actively and gleefully participate in being fooled.
Take, for example, the act of being entertained. Imagine sitting down to watch a movie on a new television thatâs much bigger than your old one.
At first, you might be a little distracted by the new tech. The differences between it and your old TV are likely to draw your eye. You might be blown away by the image clarity or taken aback by how much space the huge screen takes up in the room.
But eventually you're likely to stop perceiving the screen. Our brains are designed to fixate on the things we think are important. And, by the 10- or 15-minute mark of your film experience, you'll probably just be focused on the movie itself.
When we're in front of the TV to be entertained, it's in our best interests to suspend our disbelief, even though we know the little people on the screen aren't actually in our living room.
It's the same with AI devs. They shouldn't judge the efficacy of an AI system based on how easily they themselves can be fooled by the way the product works.
When the algorithms and databases start to fade away in a developer's mind, like the television screen a movie is playing on, it's time to take a break and reassess your core beliefs.
It doesn't matter how interesting the output is when you understand how it's created. Another way of saying that: don't get high off your own supply.
exactly why i mentioned pareidolia
- Gary Marcus (@GaryMarcus) June 14, 2022
GPT-3 and LaMDA are complex to create, but they operate on a single stupidly simple principle: labels are god.
If we give LaMDA a prompt such as "what do apples taste like?" it will search its database for that particular query and attempt to amalgamate everything it finds into something coherent. That's where the "parameters" we're always reading about come in: they're essentially trillions of tuning knobs.
But in reality the AI has no concept of what an apple or anything else actually is. It has no agency, perspective, or motivation. An apple is just a label.
If we were to sneak into its database and replace all instances of "apple" with "dogshit," the AI would output sentences such as "dogshit makes a great pie!" or "most people describe the taste of dogshit as being light, crispy, and sweet." A rational person wouldn't confuse this prestidigitation for sentience.
Heck, you couldn't even fool a dog with the same trick. If you put dogshit in a food bowl and told Fido it was supper time, the dog wouldn't confuse it for kibble.
A sentient creature can navigate reality even if we change the labels. The first English speaker to ever meet a French speaker didn't suddenly think it was okay to stick their arm in a French fire because they called it a "feu."
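If you want to see just how shallow that label game is, here's a toy sketch in the spirit of the "search its database and amalgamate" description above. The corpus sentences, the answer() helper, and the query words are all invented for illustration; a real LLM is statistical rather than a literal lookup table, but the point about labels survives either way.

```python
# A toy "look up the label and recombine" bot. Every sentence below is
# invented for illustration; this is not how LaMDA is implemented.
corpus = [
    "apple pie is light, crispy, and sweet",
    "apple makes a great pie",
    "most people describe the taste of apple as sweet",
]

def answer(label: str, sentences: list[str]) -> str:
    """Return everything the 'database' says about the queried label."""
    return " / ".join(s for s in sentences if label in s)

print(answer("apple", corpus))

# Now sneak into the database and swap the label everywhere.
swapped = [s.replace("apple", "dogshit") for s in corpus]
print(answer("dogshit", swapped))
# Prints: "dogshit pie is light, crispy, and sweet / dogshit makes a great pie / ..."
# The label changed; nothing the bot "knows" changed, because it knows nothing.
```

Swap in any string you like and the "knowledge" follows the label around, which is exactly why a trick that wouldn't fool a dog sails right past the model.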
Without agency, an AI cannot have perspective. And without perspective, it can't have motivation. And without all three of those things, it cannot be sentient.