
This article was published on June 20, 2020

Dangerous AI algorithms and how to recognize them



When discussing the threats of artificial intelligence, the first things that come to mind are images of Skynet, The Matrix, and the robot apocalypse. The runner-up is technological unemployment, the vision of a foreseeable future in which AI algorithms take over all jobs and push humans into a struggle for meaningless survival in a world where human labor is no longer needed.

Whether either or both of those threats are real is hotly debated among scientists and thought leaders. But AI algorithms also pose more imminent threats that exist today, in ways that are less conspicuous and poorly understood.

In her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, mathematician Cathy O’Neil explores how blindly trusting algorithms to make sensitive decisions can harm many people who are on the receiving end of those decisions.

The dangers of AI algorithms can manifest themselves in algorithmic bias and dangerous feedback loops, and they can extend to all sectors of daily life, from the economy and social interactions to the criminal justice system.

While the use of mathematics and algorithms in decision-making is nothing new, recent advances in deep learning and the proliferation of black-box AI systems amplify their effects, both good and bad. And if we do not understand the present threats of AI, we will not be able to benefit from its advantages.


The characteristics of dangerous AI algorithms

We use algorithmic models to understand and process many things. “A model, after all, is nothing more than an abstract representation of some process, be it a baseball game, an oil company’s supply chain, a foreign government’s actions, or a movie theater’s attendance,” O’Neil writes in Weapons of Math Destruction. “Whether it’s running in a computer program or in our head, the model takes what we know and uses it to predict responses in various situations.”

But more and more of those models are being transferred from our heads to computers, thanks to advances in deep learning and the increased digitization of every aspect of our lives. Thanks to broadband internet, cloud computing, mobile devices, the internet of things (IoT), wearables, and a slew of other emerging technologies, we can collect and process more and more data about anything and everything.

This increased access to data and computing power has helped create AI algorithms that can automate an increasing number of tasks. Deep neural networks, which had previously been limited to research laboratories, have found their way into many areas that were once challenging for computers, such as computer vision, machine translation, speech recognition, and facial recognition.

So far, so good. What can go wrong?

In Weapons of Math Destruction, O’Neil specifies three factors that make AI models dangerous: opacity, scale, and damage.

Algorithmic vs corporate opacity


There are two aspects to the opacity of AI systems: technical and corporate. The technical opacity, also referred to as the black-box problem of artificial intelligence, has received much attention in the past few years.

In a nutshell, the question is, how do we know an AI algorithm is making the right decision? This question is becoming more critical as AI finds its way into loan application processing, credit scoring, teacher rating, recidivism prediction, and many other sensitive fields.

Many media outlets have published articles that depict AI algorithms as mysterious machines whose behavior is unknown even to their developers. But contrary to what the media portrays, not all AI algorithms are opaque.

Traditional software, often referred to as symbolic artificial intelligence in AI jargon, is known for its interpretable and transparent nature. It is composed of hand-coded rules, meticulously put together by software developers and domain experts. It can be probed and audited, and an error can be traced to the line of code where it occurred.

In contrast, machine learning algorithms, which have become increasingly popular in recent years, develop their behavior by analyzing many training examples and creating statistical inference models. This means that the developers don’t necessarily have the final say on how the AI algorithms behave.

But again, not all machine learning models are opaque. For instance, decision trees and linear regression models, two popular machine learning algorithms, give clear explanations of the factors that determine their decisions. If you train a decision tree algorithm to process loan applications, it can provide you with a tree-like breakdown (thus the name) of how it decides which loan applications to approve and which to reject. This gives developers a chance to discover potentially problematic factors and correct the model.

A decision tree provides a detailed breakdown of its decision process (Source: Medium)
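
As a rough illustration, here is a minimal sketch using scikit-learn on an entirely made-up toy dataset (the feature names income, debt, and years_employed are hypothetical). The export_text helper prints the learned if/else rules, so a developer can read exactly which factors drive each decision:

```python
from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Made-up toy loan records: [income (k$), existing debt (k$), years employed]
X = np.array([[65, 10, 5], [30, 40, 1], [90, 5, 12], [25, 35, 0],
              [70, 20, 8], [40, 50, 2], [85, 15, 10], [20, 60, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned decision rules, feature by feature
print(export_text(tree, feature_names=["income", "debt", "years_employed"]))
```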

But deep neural networks, which have become very popular in the past few years, are especially bad at revealing how they work. They are composed of layers upon layers of artificial neurons, small mathematical functions that tune their parameters to the thousands of examples they see during training. In many cases, it’s very hard to probe deep learning models and determine which factors contribute to their decision-making processes.

A deep learning algorithm for processing loan applications is an end-to-end model: an application goes in and a final verdict comes out. There is no feature-by-feature breakdown of how the AI algorithm makes its decisions. In most cases, a well-trained deep learning model will perform better than its less sophisticated siblings (decision trees, support vector machines, linear regression, etc.), and it might even spot relevant patterns that would go unnoticed by human experts.
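
For contrast, here is a similar sketch on the same made-up toy numbers, using scikit-learn’s MLPClassifier as a stand-in for a small neural network. All it exposes is a verdict and a stack of weight matrices, with no human-readable rules attached:

```python
from sklearn.neural_network import MLPClassifier
import numpy as np

# Same made-up loan records as before: [income (k$), debt (k$), years employed]
X = np.array([[65, 10, 5], [30, 40, 1], [90, 5, 12], [25, 35, 0],
              [70, 20, 8], [40, 50, 2], [85, 15, 10], [20, 60, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=5000,
                    random_state=0).fit(X, y)

print(net.predict([[50, 30, 3]]))      # a bare verdict, e.g. [0] or [1]
print([w.shape for w in net.coefs_])   # weight matrices such as (3, 16), (16, 16), (16, 1) -- not rules
```

The network may well be more accurate than the decision tree, but its weights alone do not explain why any single application was rejected.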

However, even the most accurate deep learning systems make errors every once in a while, and when they do, it is very hard to determine what went wrong. But a deep learning system doesn’t need to make errors before its opacity becomes problematic. Suppose an angry customer wants to know why an AI system has turned down their loan application. With an interpretable AI system, you can provide a clear explanation of the steps that went into the decision. With an opaque system, all you can do is shrug and say, “The computer said so.”

But while the technical opacity of artificial intelligence algorithms has received a lot of attention in tech media, what’s less discussed is the opaque way companies use their algorithms, even when the algorithms themselves are trivial and interpretable.

“Even if the participant is aware of being modeled, or what the model is used for, is the model opaque, or even invisible?” O’Neil questions in Weapons of Math Destruction.

Companies that view their AI algorithms as corporate secrets do their best to hide them behind walled gardens to keep an edge over their competitors. We don’t know much about the AI algorithm powering Google Search, the models that generate our friend suggestions on Facebook, or the ones that populate our Twitter feeds, among others.

Some of this secrecy is justified. For instance, if Google published the inner workings of its search algorithm, it would become vulnerable to gaming. In fact, even without Google revealing much detail about its search algorithm, there’s an entire industry dedicated to finding shortcuts to top-ranking positions in Google Search. Algorithms are, after all, mindless machines that play by their own rules. They don’t use common-sense judgment to identify bad actors who twist the rules for devious purposes.

But staying on the same example, without transparency, how can we make sure that Google itself is not manipulating search results to serve its own political goals and economic interests? In 2018, U.S. President Donald Trump accused Google of burying conservative news outlets in its search results and favoring liberal media. The claim put Google on the defensive, and the company’s spokespersons could only promise that they would do no such thing.

This only shows the fine line that organizations walk when they use AI algorithms. When AI systems are not transparent, they don’t even need to make errors to wreak havoc. Even the shadow of a doubt about a system’s performance can be enough to sow mistrust in it. On the other hand, too much transparency can also backfire and lead to other disastrous results.

O’Neil wrote Weapons of Math Destruction in 2016, before rules like GDPR and CCPA came into effect. Those regulations require companies to be transparent about the use of AI algorithms and allow users to investigate the decision process behind their automation systems. Other developments, such as the ethical AI rules of the European Commission, also incentivize transparency.

While much progress has been made in addressing the technical, ethical, and legal issues surrounding AI transparency, a lot more still needs to be done. As regulators pass new laws to rein in corporate secrecy, corporations find new ways to circumvent those rules without landing themselves in hot water, such as very long terms-of-service dialogs that inconspicuously deprive you of your right to algorithmic transparency.

Who bears the damage of AI algorithms?


There are plenty of examples of AI algorithms making dumb shopping suggestions, misclassifying images, and doing other silly things. But as AI models become more and more ingrained in our lives, their errors are moving from benign to destructive.

In her book, O’Neil explores many cases of algorithms causing damage to people’s lives. Examples include credit scoring systems that wrongfully penalize people, recidivism algorithms that give heavier sentences to defendants based on their race and ethnic background, teacher-scoring systems that end up firing well-performing teachers and rewarding cheaters, and trading algorithms that make billions of dollars at the expense of low-income classes.

The impact of an algorithm, combined with its lack of transparency, contributes to the creation of a dangerous AI system. For example, O’Neil says, “The new recidivism models are complicated and mathematical. But embedded within these models are a host of assumptions, some of them prejudicial,” and adds, “the workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite.”

This basically means that an AI algorithm can decide to keep a person in jail based on their race, and the defendant has no way to find out why they were deemed ineligible for pardon.

There are two more factors that make the damage of dangerous AI algorithms even more harmful.

First, the data. Machine learning algorithms rely on quality data for training and accuracy. If you want an image classifier to accurately detect pictures of cats, you must provide it with many labeled pictures of cats. Likewise, a loan-application algorithm needs lots of historical records of loan applications and their outcomes (repaid or defaulted).

The problem is that those who are hurt by AI algorithms are often the people on whom there isn’t enough quality data. This is why loan application processors provide better services to those who already have adequate access to banking and penalize the unbanked and underprivileged, who have been largely excluded from the financial system.
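
Here is a deliberately simplified sketch of that dynamic on entirely synthetic data: the underrepresented group follows a different pattern than the well-represented one, and because it contributes only a sliver of the training records, the model serves it markedly worse (the groups, features, and numbers are all hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def group_a(n):
    # Well-represented group: outcome depends mainly on feature 0
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] > 0).astype(int)

def group_b(n):
    # Underrepresented group: outcome depends mainly on feature 1
    X = rng.normal(size=(n, 2))
    return X, (X[:, 1] > 0).astype(int)

# Training data: 5,000 records for group A, only 100 for group B
Xa, ya = group_a(5000)
Xb, yb = group_b(100)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh evaluation data for each group reveals a large accuracy gap
Xa_t, ya_t = group_a(2000)
Xb_t, yb_t = group_b(2000)
print("well-represented group:", accuracy_score(ya_t, model.predict(Xa_t)))
print("underrepresented group:", accuracy_score(yb_t, model.predict(Xb_t)))
```

In this toy setup the model is simply dominated by the patterns of the majority group; the minority’s data is too sparse to register.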

The second problem is the feedback loop. When an AI algorithm starts to make problematic decisions, its behavior generates more erroneous data, which is in turn used to further hone the algorithm, which causes even more prejudice, and the cycle continues endlessly.

On the topic of policing, O’Neil argues that prejudiced crime prediction causes more police presence in impoverished neighborhoods. “This creates a pernicious feedback loop,” she writes. “The policing itself spawns new data, which justifies more policing. And our prisons fill up with hundreds of thousands of people found guilty of victimless crimes.”
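
A toy simulation with made-up numbers captures the mechanics O’Neil describes: two districts have the same true incident rate, but the one with more recorded incidents gets the larger share of patrols, and those patrols in turn generate more records:

```python
# Two districts with the same true incident rate; district 0 merely starts
# with more *recorded* incidents. Patrols are sent to wherever the records
# point, and each patrol adds new records at the same underlying rate.
INCIDENTS_PER_PATROL = 2.0        # identical in both districts
recorded = [120.0, 100.0]         # historical records, skewed by past practice

for year in range(1, 6):
    # Prioritize the district that looks "hotter" on record (70/30 split)
    patrols = [70, 30] if recorded[0] >= recorded[1] else [30, 70]
    for i in range(2):
        recorded[i] += patrols[i] * INCIDENTS_PER_PATROL
    share = recorded[0] / sum(recorded)
    print(f"year {year}: district 0 now holds {share:.1%} of recorded incidents")
```

Even though nothing about the districts actually differs, the recorded disparity keeps growing, and the data appears to justify the very allocation that produced it.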

When you create a bigger picture of how all these disparate-and-yet-interconnected AI systems feed into each other, you’ll see how the real harm happens. Here’s how O’Neil summarizes the situation: “Poor people are more likely to have bad credit and live in high-crime neighborhoods, surrounded by other poor people. Once the dark universe of WMDs digests that data, it showers them with predatory ads for subprime loans or for-profit schools. It sends more police to arrest them, and when they’re convicted it sentences them to longer terms. This data feeds into other WMDs, which score the same people as high risks or easy targets and proceed to block them from jobs, while jacking up their rates for mortgages, car loans, and every kind of insurance imaginable. This drives their credit rating down further, creating nothing less than a death spiral of modeling. Being poor in a world of WMDs is getting more and more dangerous and expensive.”

The explosive scale of algorithmic harm


“The third question is whether a model has the capacity to grow exponentially. As a statistician would put it, can it scale?” O’Neil writes in Weapons of Math Destruction.

Consider the Google Search example we discussed earlier. Billions of people use Google Search to find answers to important questions about health, politics, and social issues. A tiny mistake in Google’s AI algorithm can have a massive impact on public opinion.

Likewise, Facebook’s ranking algorithms decide the news that hundreds of millions of people see every day. If those algorithms are faulty, they can be gamed by malicious actors to spread fake, sensational news. Even when there’s no direct malicious intent, they can still cause harm. For instance, news feed algorithms that favor engaging content can amplify biases and create filter bubbles, making users less tolerant of alternative views.

When opaque and faulty AI algorithms determine credit scores for hundreds of millions of people or decide the fate of a country’s education system, you have all the elements of a weapon of math destruction.

So, what should be done about this? We need to acknowledge the limits of the AI algorithms that we deploy. While having an automated system that relieves you of the duty of making tough decisions might seem tempting, you must understand when humans are on the receiving end of those decisions and how they are affected.

“Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit,” O’Neil writes.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.
