

Do you want a black box AI deciding whether you live or die?



We may already feel comfortable with artificial intelligence making ordinary decisions for us in our daily lives. From product and movie recommendations on Netflix and Amazon to friend suggestions on Facebook, tailored advertisements on Google search result pages and autocorrect in virtually every app we use, artificial intelligence has become as ubiquitous as electricity or running water.

But what about profound, life-changing decisions, as in the judicial system, where a person can be sentenced based on algorithms he isn’t even allowed to see?

A few months ago, when Chief Justice John G. Roberts Jr. visited the Rensselaer Polytechnic Institute in upstate New York, Shirley Ann Jackson, president of the college, asked him “when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?”

The chief justice’s answer was truly startling. “It’s a day that’s here,” he said, “and it’s putting a significant strain on how the judiciary goes about doing things.”

In the well-publicized case Loomis v. Wisconsin, where the sentence was based in part on a secret algorithm, the defendant argued, without success, that the ruling was unconstitutional since neither he nor the judge was allowed to inspect the inner workings of the computer program.

Northpointe Inc., the company behind Compas, the assessment software that deemed Mr. Loomis to have an above-average risk of reoffending, declined to disclose its algorithms, saying last year, “The key to our product is the algorithms, and they’re proprietary.”

“We’ve created them, and we don’t release them because it’s certainly a core piece of our business,” one of its executives added.

Computationally generated risk assessments are increasingly common in U.S. courtrooms and are handed to judicial decision makers at every stage of the process. These so-called risk scores help judges decide bond amounts, how harsh a sentence should be and even whether a defendant can be set free. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.

But for the sake of argument, let’s assume for a moment that every piece of software and every algorithm used by government institutions like law enforcement and the courts is properly peer-reviewed and vetted under comprehensive regulations. The problem isn’t just about companies and governments being transparent about their algorithms and methods, but also about how to interpret and understand something that is a black box to humans.

Most modern artificially intelligent systems are based on some form of machine learning. In simple terms, machine learning means training an artificial neural network on already-labeled data so it can extract general concepts from specific cases. It’s all about statistics: by feeding thousands upon thousands of prepared examples to the network, you let the system gradually fine-tune the weights of its individual neurons, layer by layer. The end result is a complex interplay in which every neuron’s weight has a small say in the final output.
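To make that concrete, here is a minimal sketch of supervised training in plain Python with NumPy. The data, the labeling rule, the network size and the learning rate are all invented for illustration; this toy has nothing to do with the commercial systems discussed in this article.

```python
# A minimal sketch of supervised learning: a tiny two-layer network
# nudges its weights until its predictions match the labeled examples.
# All data and hyperparameters here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))               # 1,000 examples, 4 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy labeling rule to discover

W1 = rng.normal(scale=0.5, size=(4, 8))      # hidden-layer weights
W2 = rng.normal(scale=0.5, size=(8, 1))      # output-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1)                      # hidden activations
    p = sigmoid(h @ W2).ravel()              # predicted probabilities
    grad_out = (p - y)[:, None] / len(X)     # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # backpropagate to hidden layer
    W2 -= lr * h.T @ grad_out                # gradient-descent weight updates
    W1 -= lr * X.T @ grad_h

print("training accuracy:", ((p > 0.5) == y).mean())
```

Nothing in the loop states the underlying rule; the network merely absorbs it into its weights.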

Much like our own brains, the inner workings of a trained model aren’t like a traditional rule-based algorithm, where each input has a predefined output. The only thing we can do is try to create the best model possible and train it with as much unbiased data as we can get our hands on. Why it then produces a particular answer remains a mystery, even to the scientists who built it.
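To illustrate the contrast, here is a short continuation of the toy sketch above (it reuses the trained W1 and W2 from that snippet): a hand-written rule can be read and audited line by line, while the network’s equivalent “knowledge” is nothing but grids of floating-point numbers.

```python
# Continuing the toy sketch above (reuses the trained W1 and W2).
def rule_based(x):
    # A traditional algorithm: every branch is explicit and auditable.
    return 1.0 if x[0] + x[1] > 0 else 0.0

# The trained network encodes a similar decision, but only as raw weights:
print(W1.round(2))  # a 4x8 grid of numbers, none individually meaningful
print(W2.round(2))  # eight more; only together do they yield a decision
```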

When it comes to distinguishing a dog from a cat, current artificial intelligence technology does a pretty good job. And if it mistakenly calls a cat a dog, well, it’s not that big a deal. But life-and-death decisions are another matter altogether.

Take self-driving cars, for instance. There are already projects in which not a single line of the code that drives the car was written by engineers. In 2016, Nvidia ran an autonomous car whose deep learning algorithms had learned how to drive just by watching a human driver. When you contemplate the consequences, it becomes a little disturbing.

Think of the classic situation in which the car faces a little girl running across the street, followed by her dad.

The car has to decide between colliding with the kid, colliding with the father or running into the nearby crowd. Statistically, because that’s essentially how machine learning works, say there is a 20 percent chance of fatally hitting the girl, a 60 percent chance of fatally hitting the father and a 10 percent chance of fatally hitting two of the bystanders.

There are also safety calculations for the car’s passengers: hitting the girl carries a 20 percent chance of severely injuring them, while hitting the father carries only a 10 percent chance. How should the car decide? What would you have done?
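To see what such a “decision” boils down to, here is a sketch of the expected-cost arithmetic the thought experiment implies. The probabilities come from the scenario above; the relative cost weights, and the passenger risk for the bystander option (which the scenario doesn’t specify), are assumptions made purely for illustration.

```python
# Expected-cost arithmetic for the thought experiment above.
# Cost weights are assumed: a fatality counts as 1.0, a severe
# passenger injury as 0.25. These numbers are illustrative only.
FATALITY, SEVERE_INJURY = 1.0, 0.25

options = {
    # option: (chance of fatality, victims if fatal, passenger-injury chance)
    "hit the girl":       (0.20, 1, 0.20),
    "hit the father":     (0.60, 1, 0.10),
    "hit the bystanders": (0.10, 2, 0.00),  # passenger risk not given; assumed 0
}

for choice, (p_fatal, victims, p_injury) in options.items():
    cost = p_fatal * victims * FATALITY + p_injury * SEVERE_INJURY
    print(f"{choice}: expected cost = {cost:.3f}")
```

Under these made-up weights, swerving toward the crowd comes out “cheapest,” yet no engineer ever wrote that trade-off down, and a real end-to-end network wouldn’t expose the arithmetic at all.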

Even after the fact, it isn’t possible to deduce and explain the car’s decision-making process from the AI black box. You get the idea.

It’s also about the data we feed the machines. In 2016, ProPublica documented a case of machine bias in which a black woman was deemed higher risk than a white man, even though their previous records suggested the opposite.

For better or worse, at the end of the day, there is no denying that AI will conquer every industry and aspect of our lives. From the military to our schools and shopping centers, AI will become so omnipresent that we won’t even notice it. History has shown us time and again that it is not about the technology, but about how we use it.

As Melvin Kranzberg’s first law of technology states, “Technology is neither good nor bad; nor is it neutral.” It’s up to us, the human species, to put it to good use.

This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
