
This article was published on August 8, 2018

Uber’s ‘Real-time ID Check’ doesn’t deal well with transgender drivers

Image by: Uber
Story by Tristan Greene, Editor, Neural by TNW

Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him

A transgender woman working as an Uber driver during her transition recently found herself locked out of her account. It wasn’t because of some bigot’s objections, and she’d done nothing to violate the company’s policies. It was because AI sucks at facial recognition – unless, of course, you’re white and cisgender.

In this latest facial recognition SNAFU, originally reported by CNBC, Uber’s software (which runs on Microsoft’s Cognitive Services) flagged driver Janey Webb during a routine check. The company periodically requires drivers to take a selfie when they log in, to prove they’re who they claim to be.

Webb’s facial features have changed significantly over the past few months as part of her transition, and the algorithm can’t reconcile her current appearance with her previous one. This, of course, should not be news to Uber because it’s well documented that AI can’t handle transgender faces.
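To see why a changed appearance trips these systems, it helps to know the basic mechanics: face verification typically reduces each photo to an embedding vector and compares the new selfie against the embedding stored at enrollment, flagging the login if the similarity falls below a threshold. The sketch below is a toy illustration of that idea, not Uber’s or Microsoft’s actual pipeline; the embeddings and the threshold are made-up values.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def id_check(enrolled, selfie, threshold=0.85):
    """Flag the login if the new selfie is too dissimilar from the
    embedding captured at enrollment (threshold is illustrative)."""
    return "ok" if cosine_similarity(enrolled, selfie) >= threshold else "flagged"

# Toy embeddings; real systems use vectors with hundreds of dimensions.
enrolled = [0.9, 0.1, 0.4]
similar_selfie = [0.88, 0.12, 0.42]  # appearance close to enrollment photo
changed_selfie = [0.2, 0.9, 0.1]     # appearance has changed significantly

print(id_check(enrolled, similar_selfie))  # ok
print(id_check(enrolled, changed_selfie))  # flagged
```

A driver whose features have genuinely changed looks, to the model, exactly like an impostor: the similarity score drops below the cutoff and the account gets locked, with no mechanism for legitimate change over time.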

There’s certainly an argument to be made that deploying facial recognition technology, and requiring all employees to use it, while knowing it identifies people of color and trans individuals less accurately than white cisgender people, is an act of malicious bigotry.

Because it’s not just Uber. Facial recognition technology is proliferating throughout the US. Police in Florida and Oregon recently trialed a system called Rekognition from Amazon, prompting Brian Brackeen, the CEO of facial recognition AI company Kairos, to tell TNW “I see a world where Amazon Rekognition could send more innocent African Americans to jail.”

That same software can’t tell the difference between a politician and a criminal. Normally we’d say that was actually quite human of it, but in this case the problem wasn’t the politicians. It was their skin. A study conducted by the ACLU revealed a disproportionate number of women and black people were misidentified compared to white men when using Rekognition.

Brackeen believes facial recognition software isn’t ready for the tasks many of his peers are deploying it for. He said:

Imagine a world where we already have problems in society, and now we exacerbate those prejudices with underperforming technology. Even a tiny increase in erroneous match rates of face recognition algorithms, when applied at scale, could mean the difference between literally hundreds-of-thousands to millions of mis-identified individuals.
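Brackeen’s point about scale is straightforward arithmetic: even a fraction-of-a-percent error rate becomes an enormous absolute number when applied to a whole population. The figures below are illustrative, not from the article; 300 million is roughly the US population.

```python
population = 300_000_000  # illustrative; roughly the US population

# A small erroneous-match rate, applied at national scale,
# produces hundreds of thousands to millions of misidentified people.
for error_rate in (0.001, 0.005, 0.01):  # 0.1%, 0.5%, 1%
    misidentified = int(population * error_rate)
    print(f"{error_rate:.1%} error rate -> {misidentified:,} misidentified")
```

At a 0.1 percent error rate that’s 300,000 people misidentified; at 1 percent it’s 3 million, which is exactly the “hundreds-of-thousands to millions” range Brackeen describes.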

In Uber’s case, it may just be about catering to the majority of employees because it’s evidently easier not to worry about the trans community. The percentage of Uber drivers who are currently transitioning is probably very small, and the upside of automating the identification process is alluring. But isn’t that the problem? If Uber’s software doesn’t work for all of its employees, why is it using that software?

The companies deploying facial recognition technology with total disregard for its technical limitations are, apparently, more interested in profits than civil rights. It wouldn’t be acceptable to sell a smartphone that doesn’t work for white women, or to open a company break room that gay men aren’t allowed to use. Why is it acceptable for any company to deploy identification software that doesn’t work for its transgender employees?