This article was published on November 26, 2018

To achieve ethical AI, we need better training and boundaries

Imagine for a moment that your plane suddenly doesn’t land where it should, makes a U-turn, and starts re-routing from one airport to another, trying to land in three different cities. You have no clarity about the immediate future. I’ve been there, and it’s not the greatest feeling.

Now imagine there are no humans involved in the decision-making and all decisions are silently made by a machine. How do you feel now?

I wrote this article to propose an understandable regulatory approach for AI applications. Autonomous decision-making systems are already in place, including ones that support life-critical scenarios. This is a complicated topic, so let me tell you exactly how I’ll approach it.

I offer two points of focus that can help us govern such autonomous decision-making: (1) the quality of the data on which machine learning models are trained, and (2) decision boundaries, the restrictions that separate the decisions that should be taken from those that should not.

This point of view paves the way to the question of algorithmic accountability: making AI decisions traceable and explainable.

Super-intelligence, are we there yet?

The short answer is “No”. In his work, philosopher Nick Bostrom argues that artificial superintelligence (ASI) could bring humans to extinction. Stanford professor Nils Nilsson suggests that we are far from that, and that machines should first be able to do the things humans are able to do.

At the same time, in numerous narrow fields, AI solutions are already capable of making autonomous decisions. In some healthcare applications, for example, AI decisions do not require human involvement at all. This means that artificial intelligence is becoming a subject of decision-making, not merely an object of it.

How do we govern these decisions? How do we make sure we get what we expect, especially in life-critical situations?

Decision-making in algorithmic accountability

The concept of algorithmic accountability suggests that companies should be held responsible for the results of their programmatic decisions.

When we talk about ethical AI decisions, we need to secure “ethical” training datasets and well-designed boundaries that govern AI decisions “ethically”. These are the two pillars of algorithmic accountability. In plain English: the thinking, and the acting.

Pillar 1: Training examples and bias

AI can be aware of nuances and can keep learning them without getting tired; however, AI knows only what it is “taught” (data in, bias in) and controls only what we give it control of. Bias has a huge positive impact on the speed at which humans think and operate. So why do we talk about bias?

If we had to think about every possible option when deciding, it would probably take a lot of time to make even the simplest choice.

Because of the tremendous complexity of the world around us and the amount of information in the environment, it is sometimes necessary to rely on mental shortcuts, or heuristics, that allow us to act fast.

How does this relate to AI? In 2016, Microsoft launched an AI-powered bot called Tay, which responded to tweets and chats on GroupMe and Kik.

Tay had to be shut down in just six hours because of its inability to recognize when it was making offensive or racist statements. Of course, the bot was not designed to be racist, but it learned from the users it interacted with. The training data was biased.

From algorithmic stock trading to social security compensation and risk management for loan approvals, the use of AI has expanded. As in Tay’s case, the training datasets are highly prone to contain discriminatory footprints, which can lead to biased decisions.

Today, judges use AI to help inform criminal sentencing, Google reportedly bought Mastercard data to link online ads with offline purchases, hospitals use AI to design treatment plans, and the Associated Press uses it to draft articles about minor-league sports.

As another example, a study by ProPublica found that COMPAS, a software tool used to help determine bail and sentencing decisions, was far more likely to incorrectly label a black defendant as prone to recidivism than a white defendant.

The year before that, an online advertising study found that Google showed fewer ads for high-paying jobs to women than it did to men. There is no bigger concern in algorithmic operations today than bias.
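To make the idea of a biased “footprint” concrete, here is a minimal sketch of how such skew can be surfaced before a model is trained. It is not a complete fairness audit; the column names ("group", "label"), the toy data, and the 0.8 threshold are illustrative assumptions.

```python
# Minimal sketch: surfacing skewed outcomes in a training set before a model
# ever sees it. Column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return rates.min() / rates.max()

# Toy training data: group B receives far fewer positive labels than group A.
train = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "label": [1, 1, 1, 1, 0, 0,  1, 0, 0, 0, 0, 0],
})

ratio = disparate_impact_ratio(train, "group", "label")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "80% rule" often used as a rough screening threshold
    print("Warning: training labels are heavily skewed against one group.")
```

A check like this does not fix a biased dataset, but it makes the bias visible before it is baked into a model’s decisions.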

Pillar 2: Boundaries of AI decisions

Let’s now look at how we can evaluate the output of an AI system. American academic and political activist Lawrence Lessig suggests that our decision space is effectively bounded by four factors: (1) architecture, (2) law, (3) markets, and (4) norms. Therefore, we can approach ethical decision-making by explicitly defining a set of boundaries on the output of an AI system.

Let’s take the simple example of a pedestrian crossing a highway. To govern the pedestrian’s decision, we can impose one or several of the following factors.

  • Architecture: The pedestrian cannot cross the road without climbing a tall fence
  • Law: The pedestrian can be fined by the police OR go to jail for crossing in the wrong place
  • Markets: The pedestrian benefits by saving time OR gets rewarded for crossing in the right place
  • Norms: It is acceptable OR not acceptable in a given society to cross the road in the wrong place

Similarly, AI decisions can be governed by bounding factors. Boundaries matter because they narrow down the number of outputs (or decisions) that can be chosen from the decision space (the totality of all decision alternatives).
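To illustrate what bounding a decision space can look like in practice, here is a minimal sketch in Python. The scenario (a hypothetical treatment-dosing model), the four checks, and all thresholds are illustrative assumptions that loosely mirror Lessig’s four factors; they are not a real governance API.

```python
# Minimal sketch: constraining an AI system's decision space with explicit
# boundary checks. The dosing scenario, the checks, and the thresholds are
# illustrative assumptions, not a real API.
from typing import Callable, Dict, List

Decision = Dict[str, object]
Boundary = Callable[[Decision], bool]

def within_boundaries(decision: Decision, boundaries: List[Boundary]) -> bool:
    """A decision is admissible only if every boundary allows it."""
    return all(check(decision) for check in boundaries)

# Boundaries loosely mirroring Lessig's four factors:
boundaries: List[Boundary] = [
    lambda d: d["dose_mg"] <= 500,        # architecture: hard technical limit
    lambda d: d["patient_age"] >= 18,     # law: regulation forbids use on minors
    lambda d: d["cost_usd"] <= 200,       # markets: budget constraint
    lambda d: d["has_consent"] is True,   # norms: consent is expected
]

# The model proposes candidate decisions; only the admissible subset survives.
candidates: List[Decision] = [
    {"dose_mg": 300, "patient_age": 45, "cost_usd": 120, "has_consent": True},
    {"dose_mg": 900, "patient_age": 45, "cost_usd": 120, "has_consent": True},
]
admissible = [d for d in candidates if within_boundaries(d, boundaries)]
print(f"{len(admissible)} of {len(candidates)} candidate decisions are admissible")
```

The point of the sketch is that the boundary checks live outside the model: whatever decision the model prefers, it can only be acted on if it stays inside the explicitly defined decision space.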

Nonetheless, boundaries do not solve ethical dilemmas, in which two or more moral imperatives conflict and none of the options is clearly acceptable or preferable.

Ethics? Which one?

Driven by the rapid progress in AI research, machines are acquiring the ability to learn and make decisions in order to perform tasks that were previously believed to be the exclusive domain of the human mind.

As a result, large parts of our lives will be influenced by AI in the near future. It is already widely understood that government, companies, academia, and civil society must work together to evaluate the opportunities presented by AI and to ensure that AI benefits all of humanity.

In this article, I approached the autonomous decision-making of AI and discussed the two main components that can bring algorithmic accountability to systems powered by artificial intelligence: control of the training data and control of the decision space.

Bringing the concept of algorithmic accountability to the center of the public dialog is necessary but not sufficient to make algorithmic decisions fully traceable and explainable.

While #AIforGood becomes a popular hashtag, ethicists will legitimately say that the ability to objectively judge “right” or “wrong” is valid only within a selected ethical framework, not in general.

If we know in which ethical framework decisions are made, we can effectively replicate and evaluate them. But that is a story for another article.
