Robert Williams was wrongfully arrested earlier this year in Detroit, Michigan on suspicion of stealing five watches from a store. Police responding to the scene of the crime were given grainy surveillance footage of what appeared to be a Black man absconding with the items.
Rather than perform an investigation, the police ran the footage through a facial recognition system that determined Williams was the suspect. The police then printed the image from Williams’ driver’s license and placed it in a “photo lineup” with other Black men’s faces.
The police showed the lineup to a security guard at the store where the crime occurred. Despite not having witnessed the crime, the guard decided the individual in the surveillance footage was Williams.
That was enough evidence for the police: Williams was arrested on his front lawn while his wife and two daughters watched.
But Robert Williams was innocent. Facial recognition systems can’t properly distinguish between different Black faces.
According to the ACLU:
It wasn’t until after spending a night in a cramped and filthy cell that Robert saw the surveillance image for himself. While interrogating Robert, an officer pointed to the image and asked if the man in the photo was him. Robert said it wasn’t, put the image next to his face, and said “I hope you all don’t think all Black men look alike.”
One officer responded, “The computer must have gotten it wrong.” Robert was still held for several more hours before finally being released into a cold and rainy January night, where he had to wait about an hour on a street curb for his wife to come pick him up. The charges have since been dismissed.
If Williams hadn’t seen the image for himself, he wouldn’t have been able to dispute it as the only piece of “evidence” of the crime he was wrongfully accused of. At a minimum, Williams would have been forced to either post bail or stay in jail awaiting trial, a trial where he would have been forced to prove his innocence. At worst, he risked being seriously injured or murdered during his arrest.
Sure, the algorithm’s gotten it wrong before. But this time was special: an officer admitted, out loud, that the computer got it wrong. Williams got lucky. The justice system rarely admits it lets computers make decisions.
Police and their attorneys usually sidestep the implication that AI tells cops who to arrest by claiming these systems (facial recognition, in this case) are merely investigative tools. A human, we’re told, makes the ultimate decision.
Like I said, Williams was lucky. Most people discriminated against by AI never get to see the evidence against them, especially when it can’t be represented in a simple-to-understand format like an image.
The problem isn’t that this particular AI is racist. The system the cops used in lieu of conducting an actual investigation wasn’t an anomaly; it’s the norm. All AI is racist. Most people just don’t notice it unless it’s blatant and obvious.
Recall Tay, the innocent chatbot Microsoft built to learn from the people it interacted with online. It took no time at all for Tay to become the chatbot version of an online racist. People could easily see that Tay was racist. Microsoft apologized and took it down immediately.
But Tay wasn’t designed to produce outcomes for people. Tay’s output wasn’t weighed in decision-making processes that affect humans. All of Tay’s racism was right up front, where you could see it. Tay was merely an experiment in data bias.
The truth is that when robots aren’t being explicitly racist by outputting plain-language racial epithets, the general public assumes they’re unbiased and trustworthy. But racism, as a concept, isn’t calling a Black person the “n” word or drawing a swastika on a Jewish person’s home. Those are acts of racism conducted by racists.
Racism isn’t a collection of individual actions that we can point to. Racists are hell-bent on measuring racist acts because it helps create the illusion that racism only exists if we can prove it.
But AI isn’t a racist being like a person. It doesn’t deserve the benefit of the doubt; it deserves rigorous and constant investigation. When it recommends higher prison sentences for Black men than for white men, or when it can’t tell the difference between two completely different Black men, it demonstrates that AI systems are racist. And yet we still use these systems.
Put another way: AI isn’t racist because of its biased output; it’s biased because of its racist input, and that bias makes it inherently racist to use in any capacity that affects human outcomes. Even if none of the humans working on an AI system are racist, it will become a racist system if given the chance.
An AI that, for example, only measures air temperature will become a demonstrably racist system if it is adapted in any capacity to produce output that impacts outcomes for people of different races, where at least one group is white and at least one is not.
Wherever racial bias is measurable in AI, we find it.
A system trained exclusively on Black faces will typically not be as robust as the same system trained on white faces. And if you train a system on both white and Black faces simultaneously, it will produce better outcomes for white faces.
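To see how a skewed training set produces skewed accuracy, here’s a minimal sketch in Python using synthetic data and scikit-learn. Everything in it is invented for illustration: the “groups,” the feature distributions, and the 90/10 split are stand-ins, not a model of real faces or real demographics.

```python
# A minimal, synthetic sketch: train one classifier on data that is
# 90% "group A" and 10% "group B", then measure accuracy per group.
# All data here is invented; nothing models real people.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Two classes of 5-d points; `shift` moves the whole group's
    feature distribution, standing in for group-specific appearance."""
    X0 = rng.normal(loc=shift, scale=1.0, size=(n_per_class, 5))
    X1 = rng.normal(loc=shift + 0.8, scale=1.0, size=(n_per_class, 5))
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return np.vstack([X0, X1]), y

# Group A dominates the training set 9-to-1.
Xa, ya = make_group(900, shift=0.0)   # over-represented group
Xb, yb = make_group(100, shift=3.0)   # under-represented group
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Fresh, equal-sized test sets for a fair per-group comparison.
Xa_t, ya_t = make_group(1000, shift=0.0)
Xb_t, yb_t = make_group(1000, shift=3.0)
print("accuracy on group A:", model.score(Xa_t, ya_t))
print("accuracy on group B:", model.score(Xb_t, yb_t))
# The single decision boundary is fit to the majority group, so
# accuracy on group B lands far below accuracy on group A.
```

The point of the toy example is that no line of the code is “racist”; the gap comes entirely from who is represented in the training data.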
The reason for this is very simple: AI doesn’t do many different things. It sorts and labels. Sometimes it makes guesses. That’s about it.
When AI makes inferences, and those inferences involve the potential for racism, it makes racist inferences. This is because white is the default in technology and in many of the societies that have the greatest influence on the field of technology.
We just usually don’t notice the racism until it’s as easy to see as Tay’s foul language.
- Predictive policing systems are demonstrably racist. Ask one where crime will happen and it directs you to wherever police have the densest historical presence. It doesn’t predict crime; it demonstrates that cops spend more time policing Black neighborhoods than white ones (see the sketch after this list).
- Sentencing algorithms don’t predict recidivism. They show that judges have historically handed down harsher sentences for Black people.
- Hiring algorithms don’t choose the best candidate. They choose the candidate who most closely resembles previously successful candidates.
- Technologies such as facial recognition, emotion detection, and natural language processing just flat out work better for white men than for anyone else.
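The predictive policing feedback loop is easy to simulate. Below is a minimal sketch with all numbers invented: two neighborhoods with identical true crime rates, one of which starts with more recorded arrests simply because it was patrolled more heavily in the past.

```python
# Hypothetical feedback loop: patrols follow past arrests, and new
# arrests can only happen where patrols go. Both neighborhoods have
# the SAME underlying crime rate; only the recorded history differs.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.05           # identical everywhere
arrests = {"A": 120, "B": 40}    # biased historical record, not crime
N_PATROLS = 1000                 # patrols dispatched each round

for round_num in range(1, 11):
    total = sum(arrests.values())
    snapshot = dict(arrests)     # allocate patrols from last round's data
    for hood, past in snapshot.items():
        # "Predictive" step: send patrols where past arrests are densest.
        patrols = round(N_PATROLS * past / total)
        # An arrest requires an officer present, so recorded "crime"
        # tracks patrol density rather than actual offending.
        arrests[hood] += sum(
            1 for _ in range(patrols) if random.random() < TRUE_CRIME_RATE
        )
    share_a = arrests["A"] / sum(arrests.values())
    print(f"round {round_num}: neighborhood A's share = {share_a:.0%}")
# Neighborhood A's share of arrests (and therefore of future patrols)
# never corrects toward 50%, even though offending is identical.
```

The system never “learns” that the two neighborhoods are identical, because its only input is a record that it itself helps create.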
It’s only considered acceptable to profit off of and use products that serve white men above all others because racism is the default.
The fact that we still use these racist AI systems indicates that society generally views the concept of better outcomes for whites as acceptable. That’s the very definition of systemic racism.