Counterpoint: The case against an AI god

We’d previously written an opinion piece titled “The case for an artificially intelligent god.” This is our counterpoint to that.

It’s a strange time to be a technology journalist. Somehow artificial intelligence has grown from a buzzword into a religion, quite literally. For tech enthusiasts like us, it’s often more comfortable to wrap our heads around ideas like algorithms and neural networks than around religion and faith.

Yet here we are, in a never-ending discussion of the ever-terrifying prediction that one day computers will become smarter and more powerful than humans. This is called ‘the singularity,’ and it’s an event a lot of smart people think is definitely going to happen. Ray Kurzweil told Futurism:

2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.

And of course there’s Anthony Levandowski and his “Way of The Future” church. Speaking to Backchannel, Levandowski said:

What is going to be created will effectively be a god. It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?

In the previous article on the subject of an AI god, I posited that an AI religion would be based on blind faith, simply because people won’t care whether the god they’ve chosen is real or not. They won’t be worshiping code, but rather the idea it represents to them.


These hypothetical people aren’t worshiping a god, though; they’re worshiping a machine, like a toaster, no matter how fervent their faith.

AI can’t be lazy, and it doesn’t get upset when you lock it in a room by itself. You can’t threaten a robot with unplugging. Sophia, the first robot given citizenship, doesn’t know what it means to be a citizen, and it doesn’t believe any of the things it says any more than my Alexa speaker believes the lyrics of a song it’s playing.

Acting like a person isn’t an endorsement of humanity, nor is being programmed to be a god a characterization of divinity. Worshiping a toaster doesn’t make the toaster smart (or brave) – but to each their own. I’m certainly not disparaging the good people at “Way of The Future.” In fact, I hope they find digital happiness.

A more modest version of the singularity might simply be a computer that can pass the Turing test. And it would be incredible to meet an AI so intelligent that it was indistinguishable from a human in conversation.

On the other hand – who cares if a robot can convince an expert that it’s human via a chat interface? Humans are obnoxious twits on social media and they use piss-poor grammar in texts. If we’re going to make robots that chat like us, count me out.

A world where one robot goes on Twitter to talk shit about a different ‘breed’ of robot, or sends unwanted ‘circuit pics,’ sounds ridiculous – but how else is an AI supposed to pass the Turing test now?

Screw the Turing test; let’s make robots great. If developers choose to focus on making AI appear as human as possible, they aren’t creating a better machine; they’re just taking shortcuts to make people think they have.

A true singularity – a computer with motivation, desire, and self-awareness – would result in a sentient creature that deserved to live. It would be an irreplaceable soul capable of inhabiting at least one machine that humans can interface with.

You could piss it off, and it could choose not to work with you. That’s not helpful.

Machines don’t want to live. That’s a fact – they lack the capacity for desire. My watch doesn’t want to tell me what time it is, but it’s not averse to doing so either. The same goes for my coffee pot, my shoes, and my smartphone: they do their job because they are tools.

If the singularity is what occurs when computers are smarter than people, we should stop looking at it as an event. The singularity has been happening since the creation of the first computing devices thousands of years ago. Computers will almost certainly become smarter than humans – and according to many experts, this may even happen in our lifetimes.

And yes, people will surely worship those computers, as Levandowski told Backchannel:

In the future, if something is much, much smarter, there’s going to be a transition as to who is actually in charge. What we want is the peaceful, serene transition of control of the planet from humans to whatever. And to ensure that the ‘whatever’ knows who helped it get along.

But what if machines never develop the incredibly human trait of ‘giving a shit’ about anything?

Maybe the worst-case scenario is Professor Nick Bostrom’s paperclip maximizer thought experiment, which imagines a mindless AI converting the entire world into paperclips because it was designed to maximize paperclip production. That sounds pretty bleak, but it’s better than machines that actually want to kill us.

We’ve got enough of that already with humans regularly murdering one another – often over issues pertaining to religion, ironically.

If you take the stance that computers will never ‘care’ about or ‘desire’ anything, it seems like a waste of time to worship them. Even if computers do one day look upon us as pets, as Levandowski says they might, they won’t care whether we’re good dogs or bad ones.

If it isn’t provocative to say that a real God – a deity – should appreciate when humans strive to do good even if they don’t follow the ‘right’ religion, it should also be safe to say that our robot overlords will appreciate our efforts at being the best humans we can be more than any tithes, rituals, or hymns.

Maybe Levandowski should focus his church on creating the AI that best serves humankind. That way, we might still be okay whether he’s right or not.
