This article was published on June 19, 2011

What is the Technological Singularity?


Moore’s Law has been around for 46 years. It describes a trend in the development of computer hardware that has held for decades with no sign of slowing down: the number of transistors that can be placed on an integrated circuit doubles every two years.

The law is named after Gordon Moore, who described this pattern in 1965. He would know a thing or two about integrated circuits. He co-founded Intel in 1968.

Moore has said in recent years that there are only about 10 or 20 years left in this trend, because “we’re approaching the size of atoms which is a fundamental barrier.” But then, he said, we’ll just make bigger chips.
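
For a sense of what that doubling implies, here’s a minimal sketch in Python (the function and the Intel 4004 starting figure are my own illustration, not from the article):

    # Illustrative arithmetic only: project transistor counts assuming a
    # clean two-year doubling (real chips only roughly follow this curve).
    def moores_law(initial_count, start_year, end_year, doubling_period=2.0):
        doublings = (end_year - start_year) / doubling_period
        return initial_count * 2 ** doublings

    # The Intel 4004 of 1971 held roughly 2,300 transistors. Forty years
    # (twenty doublings) later, the projection lands near 2.4 billion.
    print(f"{moores_law(2300, 1971, 2011):,.0f}")

Twenty doublings multiplies the count by about a million, which is roughly where flagship chips sat in 2011.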

Ray Kurzweil, whom we mentioned in last weekend’s piece on transhumanism, is known even better for his thoughts on another subject: the technological Singularity.

The Singularity follows the point at which our technological creations exceed the computing power of human brains. Based on Moore’s Law and the broader trend of exponential growth in technology, Kurzweil predicts that point will arrive before the mid-21st century.

We’ll see artificial intelligence that exceeds human intelligence around the same time, he says. But there’s more to it than just having created smarter intelligences. There are profound ramifications, but we’ll get to those soon.

The term “technological singularity” was coined by science fiction author Vernor Vinge in 1983. “We will soon create intelligences greater than our own,” he wrote. “When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”

He was unifying the thoughts of many of his predecessors, Alan Turing and I. J. Good among them.

The idea is that when we become capable of creating beings more intelligent than us, it stands to reason that they — or their near-descendants — will be able to create intelligences more intelligent than themselves. This exponential growth of intelligences would work much like Moore’s Law — perhaps we can call it Kurzweil’s Law — but have more profound significance. When there are intelligences capable of creating more intelligent beings in rapid succession, we enter an age where technological advances move at a rate we can’t even dream of right now.

And that’s saying something: thanks to the nature of exponential growth, technology is already advancing at the fastest pace we’ve ever seen.
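
To see how quickly that kind of compounding runs away, here’s a toy model in Python (the 50% gain per generation is an arbitrary assumption, chosen only to show the shape of the curve):

    # Toy model of recursive self-improvement: each generation designs a
    # successor some fixed factor more capable, so capability compounds.
    capability = 1.0    # human baseline, in arbitrary units
    improvement = 1.5   # assumed gain per generation (purely illustrative)
    for generation in range(1, 11):
        capability *= improvement
        print(f"generation {generation}: {capability:.1f}x human baseline")

After ten generations the model sits at nearly 58 times the baseline, and nothing in it slows down; shrink the time between generations and you get the runaway curve Vinge described.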

The singularity doesn’t refer so much to the development of superhuman artificial intelligence — although that is foundational to the concept — as it does to the point when our ability to predict what happens next in technological advance breaks down.

What Will the Singularity Look Like?

Singularitarians say that we simply can’t imagine what such a future would be like, and it’s hard to fault that logic. Try to imagine what the world would look like even a decade after human intelligence falls to near the bottom of the ladder. The short answer: you can’t. The point is that beings more intelligent than us will be capable of not just imagining, but creating, things we can’t even dream about.

We can speculate as to the changes the Singularity would bring that would enable that exponential growth to continue. Once we build computers with processing power greater than the human brain’s, running self-aware software more intelligent than a human, we will see improvements to the speed at which these artificial minds can be run. With faster processing, these AIs could do the thinking of a human in ever shorter amounts of time: a year’s worth of human processing would become eight months, then eventually weeks, days, minutes, and, at the far end of the spectrum, even seconds.
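
The arithmetic behind those milestones is simple; here’s a quick sketch in Python (using a simplified 365-day year):

    # Speed-up factor implied by each milestone: a year of human-level
    # thinking done in eight months, a week, a day, a minute, a second.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    for speedup in (1.5, 52, 365, 525_600, 31_536_000):
        seconds = SECONDS_PER_YEAR / speedup
        print(f"{speedup:>14,.1f}x faster -> {seconds / 86400:12,.4f} days")

A mind only 1.5 times faster already does a year’s thinking in about eight months; at a millionfold speedup the same year takes about half a minute.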

There is some debate about whether there’s a ceiling to the processing speed of intelligence, though scientists agree there is plenty of room for improvement before any such limit is reached. Nobody can say where that limit might sit, but it’s still fascinating to imagine an intelligence doing a year’s worth of human thinking in a single minute.

With that superhuman intelligence and incredibly fast processing behind it, it’s not a stretch to imagine such software rewriting its own source code as it arrives at new conclusions and progressively improves itself.

The Age of the Posthuman

What’s interesting is that such post-Singularity improvements to machine speed and intelligence could cross over to human minds. Futurists speculate that such advanced technology would enable us to improve the processing power, intelligence and accessible memory of our own minds, either by changing the structure of the brain or by ‘porting’ our minds onto the same hardware these intelligences will run on.

In last week’s piece I asked whether we’d be able to tell when we crossed the line from transhuman to posthuman, or whether that line would be ever-moving as we found new ways to augment ourselves.

But here’s another, contrary question: could the Singularity, should it arrive, bring the age of the posthuman? If we are able to create superhuman intelligence and then upgrade our own intelligence by changing the fundamental structure of our minds, is that posthuman enough?

Augmentation is one thing: upgrading human blood to vasculoid, or letting us switch off emotions when we need to avoid an impulse purchase, are merely augmentations. Increasing our baseline intelligence and processing speed seems to me much more significant: an upgrade rather than an augment.

There is, of course, no reason to think that our creations would have any interest in us, or in improving the hardware on which we currently run. Many science fiction authors have postulated that superhuman artificial intelligence would in fact want us extinct, given that our species’ behavior doesn’t lend itself to sustainability.

Is the Singularity Near?

The real question, of course, is whether such a technological singularity will ever happen. Just because it has been predicted by some doesn’t mean it will, and there’s plenty of debate on both sides of the argument. Ever the technological optimist, I’m going to avoid the question in this piece — though that’s not to say I don’t think it’s an important one. You can have a look at David Brin’s fantastic article, Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future, for more discussion of that question. I’m fond of this quote from Brin’s piece:

“How can models, created within an earlier, cruder system, properly simulate and predict the behavior of a later and vastly more complex system?”

Of course, if you accept that quote as the basis for an argument, it’s just as hard to map progress towards the Singularity as it is to deny that it will happen.

According to Kurzweil’s predictions, we will see computer systems as powerful as the human brain in 2020. We won’t have created artificial intelligence until after 2029, the year in which Kurzweil predicts we will have reverse-engineered the brain. It’s that breakthrough that will allow us to create artificial intelligence, and begin to explore other ideas like that of mind uploading.

Current trends certainly don’t contradict such a timeline. In 2009, Dr Anthony Berglas wrote in a paper entitled “Artificial Intelligence Will Kill Our Grandchildren” that:

“A computer that was ten thousand times faster than a desktop computer would probably be at least as computationally powerful as the human brain. With specialized hardware it would not be difficult to build such a machine in the very near future.”

It’s also important to consider that if Kurzweil’s predictions come true, then by 2029, when we’ve reverse-engineered the brain, we will already have had nine years of improvement on those computer systems with brain-like power and capacity. In this timeline, as soon as we create artificial intelligence it will already think faster than humans, with faster access to more varied input, thanks to the hardware it runs on.

By 2045, Kurzweil says, we will have expanded the capacity for intelligence of our civilization, by that stage a combination of software and people, one billion fold.
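
A billionfold is worth unpacking with some back-of-the-envelope arithmetic (illustrative only, not Kurzweil’s actual model):

    import math

    # A billionfold expansion is about 30 doublings. At Moore's Law's
    # fixed two-year pace that would take roughly 60 years; Kurzweil's
    # accelerating-returns argument assumes the doubling period itself
    # keeps shrinking, which is what compresses his timeline.
    doublings = math.log2(1e9)           # ~29.9
    years_at_fixed_pace = doublings * 2  # ~60 years at one doubling per 2 years
    print(f"{doublings:.1f} doublings, ~{years_at_fixed_pace:.0f} years at a fixed pace")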

One only needs to look at history to see our capacity for rapid improvement. One of my favorite metrics is life expectancy. In 1800, the average life expectancy was 30, mostly due to high infant mortality rates, and the kind of old age we see as common today was a rare event. In 2000, the life expectancy of developed countries was 75. If we can more than double the average life expectancy in our society in the space of a historical blip, there’s much more to be excited about ahead.
