
This article was published on November 23, 2020

Eureka: A family of computer scientists developed a blueprint for machine consciousness


Image by: Rog01

Renowned researchers Manuel Blum and Lenore Blum have devoted their entire lives to the study of computer science with a particular focus on consciousness. They’ve authored dozens of papers and taught for decades at prestigious Carnegie Mellon University. And, just recently, they published new research that could serve as a blueprint for developing and demonstrating machine consciousness.

That paper, titled “A Theoretical Computer Science Perspective on Consciousness,” may only be a pre-print, but even if it crashes and burns at peer review (it almost surely won’t) it’ll still hold an incredible distinction in the world of theoretical computer science.

The Blums are joined by a third collaborator, one Avrim Blum, their son. Per the Blums’ paper:

All three Blums received their PhDs at MIT and spent a cumulative 65 wonderful years on the faculty of the Computer Science Department at CMU. Currently the elder two are emeriti and the younger is Chief Academic Officer at TTI Chicago, a PhD-granting computer science research institute focusing on areas of machine learning, algorithms, AI (robotics, natural language, speech, and vision), data science and computational biology, and located on the University of Chicago campus.

This is their first joint paper.

Hats off to the Blums; there can’t be too many theoretical computer science families at the cutting edge of machine consciousness research. I’m curious what the family pet is like.

Let’s move on to the paper, shall we? It’s a fascinating and well-explained bit of hardcore research that very well could change some perspectives on machine consciousness.


Per the paper:

Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness.

In this context, a CTM would appear to be any machine that can demonstrate consciousness. The big idea here isn’t necessarily the development of a thinking robot, but rather a demonstration of the core concepts of consciousness, in hopes we’ll gain a better understanding of our own.

This requires the reduction of consciousness to something that can be expressed in mathematical terms. But it’s a little more complicated than just measuring brain waves. Here’s how the Blums put it:

An important major goal is to determine if the CTM can experience feelings not just simulate them. We investigate in particular the feelings of pain and pleasure and suggest ways that those feelings might be generated. We argue that even a complete knowledge of the brain’s circuitry – including the neural correlates of consciousness – cannot explain what enables the brain to generate a conscious experience such as pain.

We propose an explanation that works as well for robots having brains of silicon and gold as for animals having brains of flesh and blood. Our thesis is that in CTM, it is the architecture of the system, its basic processors; its expressive inner language that we call Brainish; and its dynamics (prediction, competition, feedback and learning); that make it conscious.

Defining consciousness is only half the battle – and one that likely won’t be won until after we’ve aped it. The other side of the equation is observing and measuring consciousness. We can watch a puppy react to stimuli. Even plant consciousness can be observed. But for a machine to demonstrate consciousness, its observers have to be certain it isn’t merely imitating consciousness through clever mimicry.

Let’s not forget that GPT-3 can blow even the most cynical of minds with its uncanny ability to seem cogent, coherent, and poignant (let us also not forget that you have to hit “generate new text” a bunch of times to get it to do so because most of what it spits out is garbage).

The Blums get around this problem by designing a system that’s only meant to demonstrate consciousness. It won’t try to act human or convince you it’s thinking. This isn’t an art project. Instead, it works a bit like a digital hourglass where each grain of sand is information.

The machine sends and receives information in the form of “chunks” that contain simple pieces of information. There can be multiple chunks of information competing for mental bandwidth, but only one chunk of information is processed at a time. And, perhaps most importantly, there’s a delay in sending the next chunk. This allows chunks to compete – with the loudest, most important one often winning.
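The paper pins this competition down with formal definitions. Purely as a loose illustration, here’s a minimal Python sketch of that one-chunk-at-a-time competition – the intensity scores and the fade-after-processing rule are my own assumptions for the demo, not the Blums’ actual mechanics:

```python
from dataclasses import dataclass
import time

@dataclass
class Chunk:
    """A simple piece of information competing for mental bandwidth."""
    payload: str
    intensity: float  # how loudly this chunk competes (illustrative only)

def compete(chunks: list[Chunk]) -> Chunk:
    """One cycle: every chunk competes, the loudest one wins."""
    return max(chunks, key=lambda c: c.intensity)

chunks = [
    Chunk("battery low", 0.4),
    Chunk("left actuator overheating", 0.9),
    Chunk("new camera frame", 0.2),
]

# Only one chunk is processed per cycle, and there's a delay before
# the next competition. Fading a processed chunk (an assumption of
# this sketch) lets quieter chunks win later rounds.
for _ in range(3):
    winner = compete(chunks)
    print("processing:", winner.payload)
    winner.intensity *= 0.5
    time.sleep(0.1)
```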

The winning chunks form the machine’s stream of consciousness. This allows the machine to demonstrate adherence to a theory of time and to experience the mechanical equivalent of pain and pleasure. According to the researchers, the competing chunks would have greater weight if the information they carried indicated the machine was in extreme pain:

Less extreme pain and chronic pain do not so much prevent other chunks from reaching the stage as make it “difficult” for them to reach it. In the deterministic CTM, the difficulty for a chunk to get into STM is measured by how much greater the chunk’s intensity would have to be for it to get into STM. In the probabilistic CTM, the difficulty is measured by how much greater the chunk’s intensity would have to be to get allotted a “suitably larger” share of time in STM.
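The paper makes that deterministic/probabilistic distinction precise. As a rough sketch only – the chunk names, intensity numbers, and the intensity-proportional weighting below are illustrative assumptions, not the paper’s definitions – here’s how the two selection rules into short-term memory (STM) differ:

```python
import random

# Chunks as (payload, intensity) pairs; the values are made up.
pain = ("extreme pain: actuator overheating", 9.0)
frame = ("new camera frame", 1.0)

def deterministic_stm(chunks):
    """Deterministic CTM: the highest-intensity chunk always reaches STM."""
    return max(chunks, key=lambda c: c[1])

def probabilistic_stm(chunks):
    """Probabilistic CTM: chunks get STM time in proportion to their
    intensity, so an intense chunk wins often but not always."""
    return random.choices(chunks, weights=[c[1] for c in chunks])[0]

print(deterministic_stm([pain, frame])[0])  # pain wins outright every time

wins = sum(probabilistic_stm([pain, frame]) is pain for _ in range(1000))
print(f"pain reached STM in about {wins}/1000 probabilistic rounds")
```

Under the probabilistic rule, a lower-intensity chronic-pain chunk wouldn’t block everything else outright; it would just make it harder for other chunks to get through, exactly as the quote describes.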

A machine programmed with such a stream of consciousness would effectively have the bulk of its processing power (mental bandwidth) taken up by extreme amounts of pain. This, in theory, could motivate it to repair itself or deal with whatever’s threatening it.

But, before we get that far, we’ll need to actually figure out if reverse-engineering the idea of consciousness down to the equivalent of high-stakes reinforcement learning is a viable proxy for being alive.

You can read the whole paper here.

For more coverage on robot brains, check out Neural’s optimistic speculation on machine sentience in our newest series here.
