This article was published on September 16, 2017

AI can make justice truly blind — but not just yet

“Justice is blind.”

It’s a wonderful concept that represents an even-handed legal system that is impartial and objective in equal measure. But there’s no denying the fact that the justice system is deeply flawed.

So, could artificial intelligence provide the answer? Eventually, yes, but not yet.

The problems in the legal system are plain to see. The courts are jammed with appeals, cases get thrown out on technicalities, and every day brings outrageous stories of judges handing down overly lenient or absurdly severe sentences.

Sentences shouldn’t depend on food intake

Sometimes it seems that people’s lives are decided by a judge’s mood, and there’s a running joke among lawyers that justice depends on what the judge ate for breakfast.

One famous study of Israeli judges, based on rulings from 2009, found that a judge is markedly more likely to be lenient after a break. The probability of a lenient ruling started at around 65 percent at the beginning of the day, declined steadily as the session wore on, then jumped back to roughly 65 percent directly after the lunch break.

Justice should not depend on the time of day your hearing occurs. So, surely, it’s a matter of urgency to replace human judges with AI that won’t let an empty stomach sway its decisions?

AI judges could also help clear the backlog of cases that threatens to drown the American legal system. Partly because of that backlog, plea bargains are becoming increasingly common, which can mean dangerous criminals avoid jail altogether and reoffend, purely because of a logjam at the courthouse.

This is no way to run a legal system.

AI is already helping bail hearings

Even in the simplest case of deciding whether to grant bail, one study by the National Bureau of Economic Research suggested that an algorithmic judge could reduce jail populations by 42 percent with no increase in crime, or cut crime by up to 24 percent with no change in jailing rates.

In New Jersey, that’s already happening. If a defendant meets certain criteria, they are granted bail without paying a bond. This saves the state a significant sum and means the defendant can keep working.
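To make the idea concrete, here is a rough sketch of how a rules-based pretrial screen of this kind might work. To be clear, the criteria, field names, and thresholds below are hypothetical illustrations, not New Jersey’s actual risk-assessment factors:

```python
# Hypothetical pretrial-release screen. The criteria and thresholds are
# invented for illustration; they are NOT New Jersey's actual rules.

def eligible_for_release(defendant: dict) -> bool:
    """Return True if the defendant passes a simple rules-based screen."""
    return (
        not defendant["violent_charge"]                 # non-violent charge
        and defendant["prior_failures_to_appear"] == 0  # always showed up
        and defendant["pending_charges"] == 0           # nothing else open
    )

print(eligible_for_release({
    "violent_charge": False,
    "prior_failures_to_appear": 0,
    "pending_charges": 0,
}))  # True -> released without paying a bond under this toy screen
```

The appeal is obvious: a screen like this is cheap, instant, and applies the same rules to every defendant, whatever the time of day.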

India has some 27 million court cases pending in its system, and an AI judicial system could obviously help to clear the simpler ones.

Sentencing and judgments should, in principle, follow an empirical formula, so AI looks like a natural fit.

Unfortunately, it’s not that simple.

AI has already been challenged

AI is slowly entering the legal system: the Wisconsin Department of Corrections has turned to COMPAS, a risk-assessment tool, to help judges determine the length of sentences.

It asks offenders a series of questions and then estimates their risk of re-offending, which helps the judge make an informed decision on the severity of the sentence.
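COMPAS itself is proprietary, so its inner workings are not public. But the general shape of a questionnaire-based risk tool can be sketched with a toy logistic model; every question, weight, and number below is invented purely for illustration:

```python
import math

# Toy recidivism-risk score in the spirit of questionnaire-based tools.
# The questions and weights are fabricated; COMPAS's real model is
# proprietary and far more involved.

WEIGHTS = {
    "prior_convictions": 0.4,      # per prior conviction
    "first_arrest_under_18": 0.8,  # 1 if true, else 0
    "unemployed": 0.3,             # 1 if true, else 0
}
INTERCEPT = -2.0

def risk_of_reoffending(answers: dict) -> float:
    """Map questionnaire answers to a probability with logistic regression."""
    z = INTERCEPT + sum(WEIGHTS[q] * answers[q] for q in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) function

print(round(risk_of_reoffending({
    "prior_convictions": 3,
    "first_arrest_under_18": 1,
    "unemployed": 1,
}), 2))  # ~0.57 under these made-up weights
```

Note that nothing in a model like this “understands” the offender; it simply adds up weighted answers, which is exactly why the weights themselves need scrutiny.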

COMPAS’s findings have already been challenged in court, ironically creating yet another case, and the mainstream media clearly has issues with judges blindly following the advice of technology they do not understand.

The tech industry is asking the wider world to place its very liberty in the hands of complex algorithms that it cannot possibly hope to explain. That is always going to be a problem.

UCL produced solid but not perfect results

Meanwhile, University College London (UCL) in the UK revealed its AI judge last year. The algorithm reached the same conclusion as the human judges in 79 percent of 584 cases that went before the European Court of Human Rights.
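The UCL team reportedly trained its model on the text of past judgments, using n-gram features and a support vector machine. A stripped-down sketch of that general approach, with two placeholder case texts standing in for the real ECHR data, might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stripped-down sketch of predicting case outcomes from judgment text.
# The training examples are placeholders, not real ECHR case texts.

cases = [
    "the applicant was detained without judicial review for months",
    "the domestic courts examined the complaint promptly and thoroughly",
]
outcomes = ["violation", "no violation"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
    LinearSVC(),                          # linear support vector machine
)
model.fit(cases, outcomes)

print(model.predict(["the applicant was held for months with no review"]))
```

A real system would need thousands of labeled judgments and careful evaluation; the point here is only that the “AI judge” is, at heart, a text classifier.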

An agreement rate of 79 percent is impressive, but what about the other 21 percent? Was the algorithm right, or were the judges?

Either way, the margin of error is just too high to make a valid case for AI right now.

And there are much bigger issues than the margin of error to address before we even think of handing the legal reins over to machine learning entirely.

Can AI judges be more biased?

Firstly, AI is only as good as the data we feed it, and there are numerous examples of AI programs becoming sensationally prejudiced.

Microsoft had to switch off Tay, a Twitter chatbot, after just 24 hours. Fellow Twitter users took it upon themselves to feed the bot racist and misogynistic views, and the algorithm simply didn’t have the internal filter or worldview to know this was wrong.

A human can become aware of inherent bias and the corruption of their own thoughts, and so can fight against it.

An algorithm simply doesn’t have that ‘gut feeling’. If it heads down the wrong path then it can end up in a very strange place.

An AI judge could obviously be protected from external influences like Twitter, but it can only learn from past judgments, and those could suffer from inherent bias.

Garbage in, garbage out

Even with every court case in history as a starting point, we could still have an issue with outliers. Past lapses in human judgment could cause a butterfly effect in the machine learning program that renders a ridiculous verdict in a real-world situation.
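“Garbage in, garbage out” is easy to demonstrate. In the fabricated example below, the “harsh” label happens to track a defendant’s neighborhood rather than the severity of the offense, and the model dutifully learns exactly that pattern:

```python
from sklearn.tree import DecisionTreeClassifier

# Fabricated sentencing data in which "harsh" outcomes track neighborhood,
# not offense severity. Any competent learner will reproduce that pattern.

# Features per row: [offense_severity (0-2), neighborhood (0 or 1)]
X = [[2, 0], [1, 0], [2, 1], [1, 1]]
y = ["lenient", "lenient", "harsh", "harsh"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Even a trivial offense from neighborhood 1 gets the "harsh" prediction:
print(model.predict([[0, 1]]))  # ['harsh']
```

The model isn’t malfunctioning; it is faithfully generalizing from biased history, which is precisely the danger.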

AI also lacks emotional intelligence. It starts from a defined point, and that can cause serious issues with political hot potatoes like race.

Last year, Beauty.ai judged an international beauty contest. The algorithm analyzed photographs of 6,000 people and selected 44 winners.

Just one had dark skin.

A handful of Asian entrants made the cut, but the vast majority of the winners were white. The inference was clear: Beauty.ai was racist.

The programmers could have started with the best intentions and even set the system up with diversity in mind. But if certain parameters effectively excluded people of color, the end result was a PR disaster. In the courtroom, the consequences could be much more severe.

Campaigns against AI are already underway

Civil liberty groups have warned against the inherent dangers of prejudiced AI in the legal system before. Law enforcement agencies now use tools to predict future crime, but several pressure groups argue that the system starts from a flawed and prejudiced base.

“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.

The simple truth is that machine learning, by its very definition, extrapolates from the information we give it to draw new conclusions. Common sense, however, is a hard thing to program into a system.

Of course, there are solutions to this problem, and adjustments to the algorithms should, eventually, weed out any inherent bias in the system.
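One standard family of adjustments is “reweighing” the training data so that group membership and outcome look statistically independent before the model ever sees them; under-represented combinations get weights above 1 and over-represented ones below 1. A minimal sketch, with illustrative group and outcome labels:

```python
from collections import Counter

# Minimal "reweighing" sketch: weight each (group, outcome) pair so the
# training data behaves as if group and outcome were independent.
# The records below are illustrative only.

records = [("A", "harsh"), ("A", "harsh"), ("A", "lenient"),
           ("B", "lenient"), ("B", "lenient"), ("B", "harsh")]

n = len(records)
group_counts = Counter(g for g, _ in records)
outcome_counts = Counter(o for _, o in records)
pair_counts = Counter(records)

# weight = P(group) * P(outcome) / P(group, outcome)
weights = {
    (g, o): (group_counts[g] / n) * (outcome_counts[o] / n)
            / (pair_counts[(g, o)] / n)
    for (g, o) in pair_counts
}
print(weights)  # e.g. ('A', 'lenient') gets 1.5, ('A', 'harsh') gets 0.75
```

A learner trained with these sample weights no longer sees group A as disproportionately “harsh”, though a technique like this addresses only one narrow statistical notion of bias.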

But getting those adjustments right could take years and a vast amount of trial and error.

AI is set for a supporting role

We do think that AI has a place in the legal system, but it will be a supporting role for the foreseeable future.

AI will prove invaluable, and the system will get much better over time. Judges will be able to rely more and more on machine learning to arrive at the correct sentence.

But, much like the AI that controls a self-driving car, we need a fully informed judge who is ready to take the wheel and who has the final say.

A second set of eyes

AI simply cannot take into account the extenuating circumstances and nuance that are part and parcel of most modern trials. But it should give the judge a set of solid guidelines to work with.

In our view, this will make for a safer legal system, where the AI keeps the judge in line and the judge keeps a close eye on the computer. They will help each other.

So, even with AI, justice will not be entirely blind. But it will have two sets of eyes on the road.

That should be a massive improvement.
