
This article was published on February 22, 2022

Who gets to decide if an AI is alive?

D) None of the above



Experts predict artificial intelligence will gain sentience within the next 100 years. Some predict it’ll happen sooner. Others say it’ll never happen. Still others say it already has.

It’s possible the experts are just guessing.

The problem with identifying “sentience” and “consciousness” is there’s no precedent when it comes to machine intelligence. You can’t just check a robot’s pulse or ask it to define “love” to see if it’s alive.

The closest thing we have to a test for sentience is the Turing Test, and, arguably, Alexa and Siri passed it years ago.

At some point, if and when AI does become sentient, we’ll need an empirical method for determining the difference between clever programming and machines that are actually self-aware.


Sentience and scientists

Any developer, marketing team, CEO, or scientist can claim they’ve created a machine that thinks and feels. There’s just one thing stopping them: the truth.

And that barrier’s only as strong as the consequences for breaking it. Currently, the companies dabbling at the edge of artificial general intelligence (AGI) have wisely stayed on the border of “it’s just a machine” without crossing into the land of “it can think.”

They use terms such as “human-level” and “strong AI” to indicate they’re working towards something that imitates human intelligence. But they usually stop short of claiming these systems are capable of experiencing thoughts and feelings.

Well, most of them anyway. Ilya Sutskever, the chief scientist at OpenAI, seems to think AI is already sentient, having tweeted that “it may be that today’s large neural networks are slightly conscious.”

But Yann LeCun, Facebook/Meta’s AI guru, believes the opposite.

And Judea Pearl, a Turing Award-winning computer scientist, thinks even fake sentience should be considered consciousness since, as he puts it, “faking it is having it.”

Here we have three of the world’s most famous computer scientists, each of them progenitors of modern artificial intelligence in their own right, debating consciousness on Twitter with the temerity and gravitas of a Star Wars versus Star Trek argument.

And this is not an isolated incident by any means. We’ve written about Twitter beefs and wacky arguments between AI experts for years.

It would appear that computer scientists are no more qualified to opine on machine sentience than philosophers are.

Living machines and their lawyers

If we can’t rely on OpenAI’s chief scientist to determine whether, for example, GPT-3 can think, then we’ll have to shift perspectives.

Perhaps a machine is only sentient if it can meet a simple set of rational qualifications for sentience. In that case, we’d need to turn to the legal system to codify and verify any potential incidents of machine consciousness.

The problem is that there’s only one country with an existing legal framework by which the rights of a sentient machine can be discussed, and that’s Saudi Arabia.

As we reported back in 2017:

A robot called Sophia, made by Hong Kong company Hanson Robotics, was given citizenship during an investment event where plans to build a supercity full of robotic technology were unveiled to a crowd of wealthy attendees.

Let’s be perfectly clear here: if Sophia the Robot is sentient, so are Amazon’s Alexa, Teddy Ruxpin, and The Rockafire Explosion.

It’s an animatronic puppet that uses natural language processing AI to generate phrases. From an engineering point of view, the machine is quite impressive. But the AI powering it is no more sophisticated than the machine learning algorithms Netflix uses to try and figure out what TV show you’ll want to watch next.

In the US, the legal system consistently demonstrates an absolute failure to grasp even the most basic concepts related to artificial intelligence.

Last year, Judge Bruce Schroeder banned prosecutors from using the “pinch to zoom” feature of an Apple iPad in the Kyle Rittenhouse trial because nobody in the courtroom properly understood how it worked.

Per an article by Ars Technica’s Jon Brodkin:

Schroeder prevented … [Kenosha County prosecutor Thomas Binger] from pinching and zooming after Rittenhouse’s defense attorney Mark Richards claimed that when a user zooms in on a video, “Apple’s iPad programming creat[es] what it thinks is there, not what necessarily is there.”

Richards provided no evidence for this claim and admitted that he doesn’t understand how the pinch-to-zoom feature works, but the judge decided the burden was on the prosecution to prove that zooming in doesn’t add new images into the video.

And the US government remains staunch in its hands-off approach to AI regulation.

It’s just as bad in the EU, where lawmakers are currently stymied by numerous sticking points, including facial recognition regulation, with conservative and liberal party lines fueling the deadlock.

What this means is that we’re unlikely to see any court, in any democratic country, make rational observations on machine sentience.

Judges and lawyers often lack basic comprehension of the systems at play, and scientists are too busy deciding where the goalposts for sentience lie to provide any sort of consistent view on the matter.

Currently, the utter confusion surrounding the field of AI has led to a paradigm where academia and peer review act as the first and only arbiters of machine sentience. Unfortunately, that puts us back into the realm of scientists arguing over science.

That just leaves PR teams and the media. On the bright side, the artificial intelligence beat is quite competitive. And many of us on it are painfully aware of how hyperbolic the entire field has become since the advent of modern deep learning.

But the dark side is that intelligent voices of reason with expertise in the field they’re covering — the reporters with years of experience telling shit from Shinola and snake oil from AI — are often shouted over by access journalists with larger audiences or peers providing straight-up coverage of big tech press releases.

No Turing Test for consciousness

The simple fact of the matter is that we don’t have a legitimate, agreed-upon test for AI sentience for the exact same reason we don’t have one for aliens: nobody’s sure exactly what we’re looking for.

Are aliens going to look like us? What if they’re two-dimensional beings who can hide by turning sideways? Will sentient AI take a form we can recognize? Or is Ilya Sutskever correct, and AI is already sentient?

Maybe AI is already superintelligent and it knows that coming out as alive would upset a delicate balance. It could be secretly working in the background to make things a tiny bit better for us every day — or worse.

Perhaps AI will never gain sentience because it’s impossible to imbue computer code with the spark of life. Maybe the best we can ever hope for is AGI.

The only thing that’s clear is that we need a Turing Test for consciousness that actually works for modern AI. If some of the smartest people on the planet seem to think we could stumble onto machine sentience at any second, it feels pragmatic to be as prepared for that moment as we possibly can be.

But we need to figure out what we’re looking for before we can find it, something easier said than done.

How would you define, detect, and determine machine sentience? Let us know on Twitter.
