This article was published on July 17, 2020

A beginner’s guide to the AI apocalypse: Artificial stupidity

Welcome to the latest article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by modern AI experts. 

In this edition we’re going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity.

But first, a few words about humans.

You won’t find much comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity has surely been the biggest threat to humans throughout history.

Luckily, we’re still the smartest species on the planet, so we’ve managed to remain in charge for a long time despite our shortcomings. Unfortunately, a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence isn’t as far from challenging our status as the planet’s apex intellect as you might think.

The experts will tell you that we’re really far away from human-level AI (HLAI). But maybe that’s because nobody’s quite sure what the benchmark for that would be. What should “a human” be able to do? Can you play the guitar? I can. Can you play the piano? I can’t.

Sure, you can argue that a human-level AI should be able to learn to play the guitar or the piano, just like a human can – many play both. But the point is that measuring human ability isn’t a cut-and-dried endeavor.

Computer scientist Roman Yampolskiy, of the University of Louisville, recently published a paper discussing this exact concept. He writes:

Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.

You decide to start by testing developed AI for some very basic abilities such as multiplying 317 by 913, and memorizing your phone number. To your surprise, the system fails on both tasks.

When you question the system’s creators, you are told that their AI is human-level artificial intelligence (HLAI) and as most people cannot perform those tasks neither can their AI. In fact, you are told, many people can’t even compute 13 x 17, or remember name of a person they just met, or recognize their coworker outside of the office, or name what they had for breakfast last Tuesday.

The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity.

Trying to define what HLAI should and shouldn’t be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?

This line of reasoning usually swings the conversation to narrow intelligence versus general intelligence. But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.

Humans don’t work that way, however. There’s no such thing as general human intelligence: the combined potential of every human function isn’t achievable by any individual. And if we build a machine capable of replacing any of us, it stands to reason that it eventually will.

And that’s cause for concern. We don’t consider which ants are most talented when we wreck an anthill to build a softball field, so why should our intellectual superiors give us any more consideration?

The good news is that most serious AI experts don’t think GAI will happen anytime soon, so the most we’ll have to deal with is whatever fuzzy definition of HLAI the person or company claiming it comes up with. Much like Google decided it had achieved quantum supremacy by setting an arbitrary (and disputed) benchmark, it’ll surprise nobody in the industry if, say, the AI crew at Facebook declares that a specific translation algorithm it has invented meets its self-imposed criteria for HLAI. Maybe it’ll be Amazon or OpenAI instead.

The bad news is that you also won’t find many reputable scientists willing to rule GAI out. And that means we could be a “eureka!” or two away from someone like Ian Goodfellow oopsing up an algorithm that ties general intelligence to hardware. When that happens, we could be looking at Bostrom’s Paperclip Maximizer in full effect. In other words: the robots won’t kill us out of spite; they’ll just forget we exist and transform the world and its habitats to suit their needs, just as we did.

That’s one theory, anyway. And, as with any potential extinction scenario, it’s important to have a plan to stop it. Since we can’t know exactly what will happen once a superintelligent artificial being emerges, we should probably just start hard-coding “artificial stupidity” into the mix.

The right dose of unwavering limitations (think Asimov’s Laws of Robotics, but more specific: caps on the number of parameters or the amount of compute a given model can use, and on the level of network integration allowed between disparate systems) could spell the difference between our existence and extinction.

So, rather than attempting to program advanced AIs with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.
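For readers who like to see an idea in code, here is a minimal sketch of what such hard-coded limits might look like in practice. Everything in it (the class names, the budget numbers, the checks) is hypothetical and purely illustrative; it is not a real safety framework, just one way to express the principle of fixed capability caps.

```python
# Illustrative sketch only: "artificial stupidity" as hard-coded capability caps.
# All names and numbers below are hypothetical, not from any real framework.

from dataclasses import dataclass


@dataclass
class CapabilityBudget:
    max_parameters: int          # hard cap on model size
    max_training_flops: float    # hard cap on compute spent training
    allow_network_access: bool   # whether the system may integrate with other systems


class BudgetExceededError(Exception):
    """Raised when a proposed model exceeds its fixed limits."""


def check_model(num_parameters: int, training_flops: float,
                wants_network: bool, budget: CapabilityBudget) -> None:
    """Refuse to build or deploy a model that exceeds its unwavering limits."""
    if num_parameters > budget.max_parameters:
        raise BudgetExceededError("parameter count over budget")
    if training_flops > budget.max_training_flops:
        raise BudgetExceededError("training compute over budget")
    if wants_network and not budget.allow_network_access:
        raise BudgetExceededError("network integration not permitted")


# Example: a deliberately modest budget for a single, isolated system.
budget = CapabilityBudget(max_parameters=10_000_000,
                          max_training_flops=1e18,
                          allow_network_access=False)

check_model(num_parameters=5_000_000, training_flops=1e17,
            wants_network=False, budget=budget)  # passes quietly
```

The point of the sketch is that the limits are checked before anything runs, and they never bend: the system is kept "stupid" by construction rather than by persuasion.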
