This article was published on May 23, 2018

Facial recognition company CEO explains why government surveillance is bad for privacy

It’s politics as usual in the US this week. The president is on Twitter complaining about the federal government spying on him while Jeff Bezos rides the Amazon cash cow all the way to Surveillance-ville, Florida. Caught in the middle is the very idea of privacy, something Brian Brackeen, CEO and founder of Kairos, thinks is under assault.

Here’s the thing: Kairos is an AI startup that specializes in facial recognition technology. So when Brackeen told TNW he was “calling for an end to face recognition-enabled surveillance” because “it is wrong,” we felt it was worth hearing him out.

It isn’t every day that a CEO tells us they can do something Sundar Pichai and Jeff Bezos can’t: turn down a lucrative government contract to develop ethically questionable AI.

Referring to recent news that Amazon is helping US law enforcement agencies deploy facial recognition AI, Brackeen told TNW:

It’s disappointing to see a household name like Amazon completely drop the ball here. As the CEO of an independent face recognition company, we have always committed ourselves to thinking deeply about how our technology might impact people’s lives – even choosing not to work with certain customers if we see signals of potential abuse. We won’t chase profits at the cost of human rights. I continue to advocate for regulation in this area; we cannot go on as an industry unchecked.

So what’s the big deal? Why is it that privacy experts, tech CEOs, and the ACLU are all terrified of the government using AI-powered facial recognition tech?

In a nutshell: facial recognition software catches bad guys in the same way a giant physical net would. Instead of treating all people as equally innocent – meaning they’re entitled to their privacy until they do something wrong – it treats everyone as equally suspect by examining all of us and then allowing an algorithm to decide whether we’ve done something wrong.

The idea that people who’ve done nothing wrong have nothing to worry about doesn’t hold when the invasion of privacy is universal, as it’s becoming in the US and already is in China. We haven’t solved the issue of bias, many of these systems remain black boxes, and there’s little in the way of peer review for algorithms that are constantly under development.

Brackeen explains the problem:

Imperfect algorithms, non-diverse training data, and poorly designed implementations dramatically increase the chance of questionable outcomes. Surveillance use cases, such as face recognition-enabled body cams, ask too much of today’s algorithms. They cannot provide even adequate answers to the challenges of applying the technology in the real world. And that’s before we even get into the ethical side of the argument.

And the ethical implications, according to Brackeen, should concern all Americans:

Amazon Rekognition claims the ability to search and track people in real time, using a database of faces many millions deep – essentially automating law enforcement surveillance capabilities at an unprecedented scale. Regular Americans are now unwittingly open to being analyzed and categorized based on their appearance. This ‘intelligence’ could then be used to ‘flag’ individuals for further assessment by authorities. At public gatherings – sporting events, national celebrations, and even political rallies or protests – people’s privacy is at risk.

Schoolchildren in China are now under surveillance by AI capable of performing emotional recognition and analysis. As we pointed out before, this gives the government a fantastic lie-detector capable of determining whether a child will self-identify as gay or protest the government.

It could be argued that the problem of bias is exacerbated in the US. AI that targets immigrants has been shown to contain human bias. And ProPublica nearly won a Pulitzer for its reporting on how AI used by the justice system showed human bias against non-whites.

Unless we solve human bias – bigotry and racism – we can’t guarantee it’ll be kept out of our algorithms. And that means AI that could potentially perform sweeping categorization of people stands to automate bigotry at a truly monstrous scale.

Brackeen isn’t just worried about his company; he told us:

Beyond my CEO role, as a black man, the severity of these claims is of major concern to me. Amazon’s push to work with government agencies and law enforcement groups encourages the use of surveillance to target communities of color.

Imagine a world where we already have problems in society, and now we exacerbate those prejudices with underperforming technology. Even a tiny increase in the erroneous match rate of face recognition algorithms, when applied at scale, could mean anywhere from hundreds of thousands to millions of misidentified individuals. The biases that exist in law enforcement pre-date face recognition technology, yet when the systems themselves are unintentionally biased due to improperly trained algorithms, the combination can be hugely damaging.
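To make that scale arithmetic concrete, here’s a rough back-of-the-envelope sketch in Python. The false-match rates and the population figure below are illustrative assumptions for the sake of the example, not numbers from Kairos, Amazon, or any deployed system:

# Back-of-the-envelope: what a small false-match rate means when
# face recognition is applied to an entire population. All numbers
# here are illustrative assumptions, not vendor or government figures.

POPULATION = 330_000_000  # rough US population, for illustration

def misidentified(false_match_rate: float, population: int = POPULATION) -> int:
    """People wrongly flagged if everyone is screened once."""
    return round(false_match_rate * population)

for rate in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1%
    print(f"false-match rate {rate:.2%}: {misidentified(rate):,} people wrongly flagged")

At a 0.1 percent false-match rate, screening everyone once wrongly flags roughly 330,000 people; at 1 percent it’s 3.3 million – exactly the hundreds-of-thousands-to-millions range Brackeen describes, produced by what looks like a rounding-error difference in accuracy.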

The future of such partnerships between giant technology companies and the US government isn’t a bright one, according to Brackeen:

I see a world where Amazon Rekognition could send more innocent African Americans to jail.
