Neural’s guide to the glorious future of AI: Here’s how machines become sentient

Welcome to Neural’s guide to the glorious future of AI. What wonders will tomorrow’s machines be capable of? How do we get from Alexa and Siri to Rosie the Robot and R2-D2? In this speculative science series we’ll put our optimist hats on and try to answer those questions and more. Let’s start with a big one: The Singularity.

The future realization of robot lifeforms is referred to by a plethora of terms – sentience, artificial general intelligence (AGI), living machines, self-aware robots, and so forth – but the one that seems most fitting is “The Singularity.”

Rather than debate semantics, we’re going to sweep all those little ways of saying “human-level intelligence or better” together and conflate them to mean: A machine capable of at least human-level reasoning, thought, memory, learning, and self-awareness.

Modern AI researchers and developers tend to gravitate towards the term AGI. Normally, we’d agree because general intelligence is grounded in metrics we can understand – to qualify, an AI would have to be able to do most stuff a human can.

But there’s a razor-thin margin between “as smart as” and “smarter than” when it comes to hypothetical general intelligence, and it seems likely a mind powered by supercomputers, quantum computers, or a vast network of cloud servers would have far greater sentient potential than our mushy organic ones. Thus, we’ll err on the side of superintelligence for the purposes of this article.

Before we can even start to figure out what a superintelligent AI would be capable of, however, we need to determine how it’s going to emerge. Let’s make some quick decisions for the purposes of discussion:

  1. Deep learning, symbolic AI, or hybrid AI either aren’t going to pan out or will require serious overhauls to bridge the gap between modern machine learning and the sentient machines of tomorrow.
  2. AGI won’t emerge by a weird act of God like a military assault robot miraculously becoming alive after being struck by lightning.

So how will our future metal buddies gain the spark of consciousness? Let’s get super scientific here and crank out a listicle with five separate ways AI could gain human-level intelligence and awareness:

  1. Machine consciousness is back-doored via quantum computing
  2. A new calculus creates the Master Algorithm
  3. Scientists develop 1:1 replication of organic neural networks
  4. Cloud consciousness emerges through scattered node optimization
  5. Alien technology

Quantum AI

In this first scenario, if we predict even a modest year-over-year increase in computation and error-correction abilities, it seems entirely plausible that machine intelligence could be brute-forced into existence by a quantum computer running strong algorithms in just a couple of centuries or so.

Basically, this means the incredibly potent combination of exponentially increasing power and self-replicating artificial intelligence could cook up a sort of digital, quantum, primordial soup for AI where we just toss in some parameters and let evolution take its course. We’ve already entered the era of quantum neural networks, so a quantum AGI doesn’t seem all that far-fetched.
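To make the “toss in some parameters and let evolution take its course” part concrete, here’s a minimal sketch in ordinary Python of that kind of evolutionary loop: a random population, a fitness score, selection, and mutation. Everything here, from the population size to the stand-in fitness function, is an illustrative assumption; a real quantum system would look nothing like this.

    import random

    POP_SIZE, GENERATIONS, DIM = 50, 100, 8

    def fitness(genome):
        # Stand-in objective; a real system would score useful behavior.
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome, rate=0.1):
        # Small random nudges play the role of evolutionary variation.
        return [g + random.gauss(0, rate) for g in genome]

    # Seed the "primordial soup" with random parameter vectors.
    population = [[random.random() for _ in range(DIM)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]  # selection
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

    print("best fitness:", fitness(max(population, key=fitness)))

The interesting part is what’s absent: nobody tells the population what a good genome looks like, only how well each one scored.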

A New Calculus Arrives

What if intelligence doesn’t require power? Sure, our fleshy bodies need energy to continue being alive and computers need electricity to run. But perhaps intelligence can exist without explicit representation. In other words: what if intelligence and consciousness can be reduced to purely mathematical concepts that only become apparent when properly executed?

A researcher by the name of Daniel Buehrer seems to think this could be possible. They wrote a fascinating research paper proposing the creation of a new form of calculus that would, effectively, allow an intelligent “master algorithm” to emerge from its own code.

The master algorithm idea isn’t new — the legendary Pedro Domingos literally wrote the book on the concept — but what Buehrer’s talking about is a different methodology. And a very cool one at that.

Here’s Buehrer’s take on how this hypothetical self-perpetuating calculus could unfold into explicit consciousness:

Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.

They even go on to propose that such a consciousness would be capable of having little internal thought wars to determine which actions occurring in the machine’s mind’s eye should be carried out in the physical world. The whole paper is pretty wild; you can read more here.
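As a thought experiment, the quoted measure is easy to mock up: count each time the system compares its model’s prediction against what actually happened and revises itself. The toy Python agent below is purely our own illustration of that reading, not code from Buehrer’s paper.

    import random

    class WorldModelAgent:
        def __init__(self):
            self.estimate = 0.0       # the agent's model of a hidden world value
            self.feedback_loops = 0   # the proposed "consciousness" counter

        def observe(self, world_value):
            error = world_value - self.estimate  # prediction vs. reality
            self.estimate += 0.5 * error         # revise the world model
            self.feedback_loops += 1             # one feedback loop closed

    agent = WorldModelAgent()
    for _ in range(20):
        agent.observe(3.0 + random.gauss(0, 0.1))  # noisy reality

    print(f"loops closed: {agent.feedback_loops}, model says: {agent.estimate:.2f}")

Under Buehrer’s framing, the counter going up is the measure; whether that constitutes anything like awareness is exactly the open question.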

A Perfect Model of the Human Brain

This one’s pretty easy to wrap your head around (pun intended). Instead of a bunch of millionaire AI developers with billion-dollar big tech research labs figuring out how to create a new species of intelligent being out of computer code, we just figure out how to create a perfect artificial brain.

Easy, right? The biggest upside here would be the potential for humans and machines to occupy the same spaces. This is clearly a recipe for augmented humans – cyborgs. Perhaps we could become immortal by transferring our own consciousnesses into non-organic brains. But the bigger picture would be the ability to develop robots and AI in the true image of humans.

If we can figure out how to make a functional replica of the human brain, including the entire neural network housed within it, all we’d need to do is keep it running and shovel the right components and algorithms into it.
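At the smallest scale, the building block of such a replica might look something like a leaky integrate-and-fire neuron, one of the standard simplified neuron models in computational neuroscience. The Python below simulates a single one; the constants are arbitrary for illustration, and a true 1:1 brain model would need tens of billions of far richer units wired together.

    def simulate_lif(current=1.5, threshold=1.0, leak=0.1, dt=0.01, steps=2000):
        voltage, spikes = 0.0, 0
        for _ in range(steps):
            voltage += dt * (current - leak * voltage)  # integrate input, leak charge
            if voltage >= threshold:                    # fire and reset
                spikes += 1
                voltage = 0.0
        return spikes

    print("spikes in simulated window:", simulate_lif())

The gap between this and a working brain is the whole point: the single neuron is trivial, the connectome is not.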

Cloud Consciousness

Maybe conscious machines are already here. Or maybe they’ll quietly show up a year or a hundred years from now completely hidden in the background. I’m talking about cloud consciousness and the idea that a self-replicating, learning AI created solely to optimize large systems could one day gain a form of sentience that would, qualitatively, indicate superintelligence but otherwise remain unnoticed by humans.

How could this happen? Imagine if Amazon Web Services or Google Search released a cutting-edge algorithm into their respective systems a few decades from now and it created its own self-propagating solution system that, through the sheer scope of its control, became self-aware. We’d have a ghost in the machine.

Since this self-organized AI system wouldn’t have been designed to interface with humans or translate its interpretations of the world it exists in into something humans can understand, it stands to reason that it could live forever as a superintelligent, self-aware, digital entity without ever alerting us to its presence.
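For a flavor of how order can emerge from scattered node optimization with no one in charge, here’s a deliberately hand-wavy Python sketch: nodes holding random values gossip pairwise averages until the whole system quietly converges. It’s an invented toy, not how any real cloud service behaves.

    import random

    nodes = [random.uniform(-10, 10) for _ in range(20)]  # each node's local state
    for _ in range(500):
        i, j = random.sample(range(len(nodes)), 2)
        nodes[i] = nodes[j] = (nodes[i] + nodes[j]) / 2   # pairwise gossip step

    print("system settled near:", round(sum(nodes) / len(nodes), 3))

No node ever sees the global picture, yet the system reaches agreement anyway, which is roughly the intuition behind the “unnoticed emergence” scenario.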

For all we know there’s a living, sentient AI chilling out in the Gmail servers just gathering data on humans (note: there almost certainly isn’t, but it’s a fun thought exercise).

Alien Technology

Don’t laugh. Of all the methods by which machines could hypothetically gain true intelligence, alien tech is the most likely to make it happen in our lifetimes.

Here we can make one of two assumptions: Aliens will either visit us sometime in the near future (perhaps to congratulate us on achieving quantum-based interstellar communication) or we’ll discover some ancient alien technology once we put humans on Mars within the next few decades. These are the basic plots of Star Trek and the Mass Effect video game series, respectively.

Here’s hoping that, no matter how The Singularity comes about, it ushers in a new age of prosperity for all intelligent beings. But just in case it doesn’t work out so well, we’ve got something that’ll help you prepare for the worst: check out Neural’s Beginner’s Guide to the AI Apocalypse series.
