
This article was published on September 15, 2017

Robots are really good at learning things like racism and bigotry


Image by: clry2

Despite our best efforts, we’re teaching AI the wrong lessons. Bias has crept into our machines and, unless we urge developers to change business as usual, it’s there to stay. We’ve asked computers to see the world as humans do, and they’ve responded by showing the potential to be as racist and ignorant as we are.

Don’t get me wrong, not all robots are bigots; but the existence of the ones that are is cause for concern.

Tay, the Microsoft bot that learned to hate Jewish people and spew racial epithets, was the victim of its own desire to please. Its creators told it to learn from everyone it interacted with, and it learned to be a bigot. Microsoft took a big risk, and Tay may have become the poster child for AI bigotry, but we all learned something.

AI is no “smarter” than its data. Bullshit in equals bullshit out.


Therein lies the problem with deep learning: once a developer or company runs out of internal data, they have to figure out where to get more. This is fine if your AI is learning to do taxes – computers have been legit at maths since they figured out the difference between a one and a zero. But what if you’re trying to make predictions about people?

I’m not so worried that robots are going to all become people-hating racists, I’m worried that people will use AI to “prove” ideas born of bigotry.

The real danger is in something called confirmation bias: coming up with an answer first and then looking only for information that supports that conclusion. There’s a whole field of AI that lends itself to this; it’s called supervised learning.

If you tell a computer system that all white people like the color orange and then tell it to find meaningful patterns to support that, it will. If you tell it that all black people own solid green t-shirts, it doesn’t dispute you; it’ll go prove you right with whatever patterns it can develop from the data it has.
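Here’s a minimal sketch of that dynamic, using entirely made-up data and the hypothetical “likes orange” premise above: because the training labels are generated straight from the biased premise, a supervised model learns the premise perfectly and reports near-perfect accuracy, which looks an awful lot like proof.

```python
# A toy sketch with fabricated data: the labels encode the biased premise,
# so a supervised model "confirms" it without ever questioning the setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical features: column 0 is a group attribute (0 or 1), column 1 is noise.
X = np.column_stack([rng.integers(0, 2, 1000), rng.random(1000)])

# Labels generated directly from the premise: group 1 "likes orange".
y = X[:, 0].astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Near-perfect accuracy -- the model has "proven" the premise it was handed.
print("accuracy:", model.score(X_test, y_test))
```

The classifier isn’t wrong about the data it was given; the data was rigged to agree with the question.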

Again, chances are you won’t run the risk of creating a bigot-bot unless your robot does something like pass judgement on people, draw conclusions based on race, or try to resurrect phrenology. AI doesn’t have the ability to debate your premise. If you’re looking for evidence that gay people share facial features, and you’re a talented enough developer, you’re going to get some good evidence to support your bad theory.

Sometimes this can function in a way that’s pretty good for science, but little else. Tay, for example, taught us a lot about chatbots, even though it helped to spread hate and unwittingly became a bigot on Twitter.

Blame data

Data always provides patterns, but it doesn’t necessarily provide accurate conclusions. You can measure anything, but not every measurement proves something.

Take the following example: if fewer women than men are seeking truck-driving jobs on a job-search website, a pattern emerges. That pattern can be interpreted in many ways, but in truth it means only one specific, factual thing: fewer women than men on that website are looking for truck-driving jobs. Yet we can make an AI find patterns to try and figure out why.

In one scenario we can imagine an AI concluding that women are better drivers because fewer are looking for work, and thus more are already in the workforce. In another, we might see the machine determine that women are worse drivers and therefore less likely to apply for those jobs. Depending on the available data, the computer will keep adding patterns.

Theoretically, the number of patterns an AI could find is endless; the computer will literally continue looking for patterns until it gets a command to stop, reaches a predetermined stopping point, or runs out of power.
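To make that concrete, here’s a rough sketch, with invented noise data and arbitrary thresholds, of what open-ended pattern hunting looks like: the loop will keep surfacing “patterns,” even in pure noise, until it hits a predetermined stopping point.

```python
# A toy sketch: brute-force "pattern" mining over random noise.
# The data, the threshold, and the limit are all made up for illustration.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((500, 20))           # 20 columns of pure noise
MAX_PATTERNS = 10                      # the predetermined stopping point

patterns = []
for a, b in combinations(range(data.shape[1]), 2):
    r = np.corrcoef(data[:, a], data[:, b])[0, 1]
    if abs(r) > 0.05:                  # a deliberately lax definition of "pattern"
        patterns.append((a, b, round(r, 3)))
    if len(patterns) >= MAX_PATTERNS:  # without this, it would just keep going
        break

# A handful of "meaningful" correlations, found in data that means nothing.
print(patterns)
```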

Blame people

Yet gender has nothing to do with driving (94 percent of all accidents are caused by human error), and the problem is representative of everything wrong with data-based conclusions. We don’t have data on everyone who has ever driven, so no matter what, we’re always looking at a prediction based on limited evidence.

If you predict a person is worse at driving because they’re female, you can’t ever prove they would be better if they were a man. It’s like saying a triangle would make a better circle if it were a circle, but it’s a triangle instead. If you tell an AI to find evidence that triangles are good at being circles, it probably will; that doesn’t make it science.

Until computers can turn to their programmer and say “That’s stupid, I’m not doing that,” like a friendly HAL 9000, we’re going to have to rely on programmers to avoid using computers to confirm their own biases, or letting others do the same.
