
This article was published on December 8, 2017

How to stop fearing black box AI and love the robot-ruled future

A winding thread throughout the entire “AI is going to rise up and destroy all humans” narrative is the terrifying concept of deep learning occurring in a black box. How do you provide oversight for a system you can’t understand? As with any other tool, however, the question isn’t whether black box AI represents a danger; it’s how we choose to use it.

The headlines declare this kind of AI untrustworthy and tell us that experts seek to end its use in government. But what is black box AI?

When all we know about a computer system is its input and its output, but not how the machine arrives at that output, the mysterious part lives inside a “black box” we cannot see into.
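
To make that concrete, here’s a minimal sketch, assuming a toy scikit-learn model (the library, dataset, and layer size are my illustrative choices, not the article’s): the input and output of a trained neural network are plainly visible, but its learned parameters don’t translate into anything a human would recognize as reasoning.

```python
# A toy "black box": inputs and outputs are observable, but the
# learned parameters don't read as human-interpretable logic.
# (scikit-learn, this dataset, and the layer size are illustrative
# assumptions, not anything from the article.)
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# Train a small neural network classifier on the data.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X, y)

# The input and the output are in plain sight...
print("prediction for the first sample:", model.predict(X[:1]))

# ...but the "how" is a pile of opaque floating-point weights.
print("learned weights:", sum(w.size for w in model.coefs_))
```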

That might sound scary but, as we’ve explained before, it really isn’t. Some people can perform long division in their heads without having to “show their work,” and this isn’t much different.

Granted, just as your high school algebra teacher demanded you explain how you arrived at your conclusion, there are very valid reasons why we might need to know how an AI arrived at its own.

Chief among those reasons is avoiding bias. Bias has become the battle cry for people who think the government should regulate the crap out of AI.

ProPublica’s award-winning “Machine Bias” investigation revealed a sinister form of algorithm-based racism: a risk-assessment algorithm that saw Black men treated even more unfairly by the US criminal justice system.

If getting rid of black box AI could end racism, this discussion would be over: we’d say ditch it. However, if you want to end black box bias, the easiest way to do that is to follow the expert advice of the AI Now Institute and stop using it for government functions that require accountability.

Anyone who advocates for a total moratorium on black box AI until we can sort this whole thing out is also asking researchers to stop pushing the cutting edge of cancer research.

An alarming conflation of arguments could take hold if the general tech community continues to tolerate fear-mongering rhetoric claiming black box AI is inherently dangerous.

If our policymakers believe this and place restrictions or regulations on deep learning systems, it is imperative that they do so with the full knowledge that those restrictions could stifle life-saving research.

The situation at hand doesn’t call for restrictions on developing the technology. Instead, we need to understand the ethics of using black box AI.

Here are a couple of questions that may help you decide whether ignorance is bliss.

  1. Some black box AI can diagnose cancer with better accuracy than humans. Would you accept the improved odds of an accurate diagnosis even though you’ll never know exactly how the computer arrived at its determination?
  2. You’re about to be sentenced for a crime, and a computer will determine whether you go to prison. It decides you deserve 20 years for a first-time misdemeanor offense, but nobody knows why. Is this okay?

Every situation calls for a different set of protocols. When humans could be affected negatively by a lack of understanding, we should probably avoid using black box systems until we get better at eradicating bias.

When humans are already dying, whether in millions of car accidents or because cancer wasn’t diagnosed in time, it seems unethical not to employ black box AI just because it can’t show its work.

You can create a doomsday scenario for any “uncontrolled” use of AI, but black box deep learning isn’t some wild magical force that humans are too stupid to understand. It’s the result of math shortcuts: layered calculations whose individual steps are simple, but far too numerous for anyone to sit around and trace by hand.
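
To see how unmagical those shortcuts are, here’s a hedged sketch of everything a plain feed-forward network actually does at prediction time: repeated matrix multiplication with a simple nonlinearity in between. The layer sizes and random weights are arbitrary illustrations, not anything from the article.

```python
# The entire "reasoning" of a plain feed-forward network at prediction
# time: matrix multiplies plus a simple nonlinearity, repeated.
# (Layer sizes and random weights here are arbitrary illustrations.)
import numpy as np

rng = np.random.default_rng(0)

# Three layers of weights and biases: 4 inputs -> 8 -> 8 -> 2 outputs.
layers = [(rng.standard_normal((4, 8)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8)),
          (rng.standard_normal((8, 2)), rng.standard_normal(2))]

def forward(x):
    # Hidden layers: a linear map followed by ReLU (zero out negatives).
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0)
    W, b = layers[-1]
    return x @ W + b  # final layer stays linear

print(forward(np.array([1.0, 0.5, -0.3, 2.0])))
```

No single step here is beyond high school math; the opacity comes from stacking millions of these steps, not from anything inscrutable hiding inside.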

Now, before I ruin everyone’s day, I will concede that AI is probably going to rise up and murder every single one of us in cold blood one day. Or cold oil. Or whatever. But it likely won’t be because of black box deep learning.

The important thing is that we don’t use AI to make the tough decisions people don’t want to make themselves. There should never exist a computer capable of taking race into account when dishing out “blind” justice.

We should continue to develop AI at the cutting edge, even if this means creating systems which can be exploited by bad or stupid humans. And we should do so for the same reason we develop sharper knives and stronger rope. This is not to better suit the purposes of evil or ignorant people, but because it has the potential to benefit all of humanity.

The problem with black box AI isn’t with how it figures things out, it’s with how we choose to use it.

On the other hand, if black box AI is directly responsible for sending robots back in time to kill us all: my bad.
