
Why are people with nothing to hide so scared of Clearview AI’s facial recognition?

Because the only people with nothing to hide are those whose privacy is guaranteed

Story by Tristan Greene

Editor, Neural by TNW

Tristan covers human-centric artificial intelligence advances, quantum computing, STEM, Spiderman, physics, and space stuff. Pronouns: He/him

The Australian government gave Clearview AI the boot earlier this week after determining the company had no right to scrape and maintain its citizens’ data.

If you or anyone else has ever uploaded a picture with your face in it to Facebook, Twitter, Instagram, or just about any website, there’s a very good chance you’re in Clearview AI’s database.

What does that mean? Clearview AI employees, millions of law enforcement agents, and anyone with access to the company’s data (which was recently exposed in a massive breach) can identify you as easily as snapping a photo with a smartphone or uploading a pic to an app.

The Australian government determined that Clearview AI posed a threat to its citizens. Per a report from Gizmodo:

In a statement, Australian Information Commissioner and Privacy Commissioner Angelene Falk said the “covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” claiming it “carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

Clearview AI founder Hoan Ton-That rebuked the Australian ruling in an email to Gizmodo by claiming his product was an important tool for justice.

Per the same article, Ton-That said:

I grew up in Australia before moving to San Francisco at age 19 to pursue my career and create consequential crime fighting facial recognition technology known the world over. I am a dual citizen of Australia and the United States, the two countries about which I care most deeply.

My company and I have acted in the best interests of these two nations and their people by assisting law enforcement in solving heinous crimes against children, seniors, and other victims of unscrupulous acts.

We only collect public data from the open internet and comply with all standards of privacy and law. I respect the time and effort that the Australian officials spent evaluating aspects of the technology I built.

It’s perfectly reasonable to assume that Clearview AI’s founder, employees, investors, and partners are all interested in the pursuit of justice.

But, if we’re going to make assumptions, we should make sure they’re informed by evidence and information.

For example, Clearview AI has deep, long-standing connections to right-wing extremists.

Per this 2020 article by Luke O’Brien, Ton-That was a prominent figure in the alt-right movement as far back as 2015:

He had joined forces with far-right subversives working to install Trump as president. They included Mike Cernovich, a Trump-affiliated propagandist who spearheaded the near-deadly Pizzagate disinformation campaign; Andrew “weev” Auernheimer, a neo-Nazi hacker and the webmaster for The Daily Stormer; and Pax Dickinson, the racist former chief technology officer of Business Insider who went on to march with neo-Nazis in Charlottesville, Virginia.

The article goes on to point out that Clearview AI’s “secret co-founder,” white nationalist and avowed racist Chuck Johnson, initially intended for the product to be used by ICE:

In January 2017, Johnson indicated on Facebook that he was “building algorithms to ID all the illegal immigrants for the deportation squads.” Soon, he was boasting to friends and acquaintances that he was working on a powerful facial recognition tool.

But none of this answers the question of why someone with nothing to hide should be concerned with facial recognition.

The answer is: for the same reason people in the US with no intention of shooting anyone should have a problem with being told they can’t keep and bear arms. Freedom from unreasonable government intrusion is in our Constitution because it’s necessary for democracy to thrive.

When the US withdrew from Afghanistan earlier this year, some of its biometric equipment got left behind. As a result, the Taliban gained access not only to hardware capable of scanning fingerprints, irises, and faces, but also to the databases containing information on local civilians.

Anyone who’d been scanned by US military forces was immediately identifiable to the Taliban. We’ve since learned that the group also intercepted a list of LGBTQPIA+ people in Afghanistan who’d reached out to aid organizations for fear they’d be discovered.

Thanks to the US military’s biometric equipment and databases, the Taliban doesn’t have to rely on painstaking detective work to find individual members of the minority groups it hopes to kill. It can simply point a camera at everyone it sees and shoot the ones the algorithm matches to its list.

But what if you’re in the US, you’re not queer, and you have nothing to hide?

I’ll point you to something TNW’s co-founder, Boris, wrote in one of his newsletters a while back:

Saying you don’t care about privacy because you have nothing to hide is as selfish as saying you don’t care about people being hungry, because you’re not. You might have the luxury of living in a society where privacy is implied, but that same privacy keeps your society in place.

The first thing dictators tend to do is take away people’s privacy and the freedom to think what they want. The fact that you don’t feel like you have secrets means you live in a society where you can be who you want to be. That’s great for you, but it should be a reason to care even more. The fewer secrets you have, the more you should value privacy.

As World War II, Vietnam, and the Gulf War taught us, there are limits to what should be allowed in the pursuit of safety and peace.

Right now, the people deciding whether Clearview AI should be allowed to operate are its own executives and the law enforcement community.

Those might not be the right people to set the rules of engagement when matters of grave consequence, such as every US citizen’s Constitutionally protected right to privacy, are at stake.

Ultimately, none of us consented to Clearview AI’s use of our images. Ton-That has lined his pockets selling a product built on our photos. And you and I haven’t seen a penny of profit from it.

What Clearview AI is doing may not be illegal in the US, yet, but it’s clearly unethical. Just look at the great lengths the company has gone to in trying to erase any evidence of Ton-That’s ties to right-wing conspiracy theorists, neo-Nazis, and white nationalists.

As it turns out, everyone either has something to hide or something to lose when their privacy is taken away.
