
This article was published on January 13, 2020

The benefits of facial recognition AI are being wildly overstated

Facial recognition technology has run amok across the globe. In the US it continues to proliferate at an alarming rate despite bipartisan push-back from politicians and outright bans in several cities. Even China’s government has begun to question whether ubiquitous surveillance tech delivers enough benefit to justify the utter destruction of public privacy.

The truth of the matter is that facial recognition technology serves only two legitimate purposes: access control and surveillance. And, far too often, the people developing the technology aren’t the ones who ultimately determine how it’s used.

Most decent, law-abiding citizens don’t mind being filmed in public and, to a certain degree, take no exception to the use of facial recognition technology in places where it makes sense.

For example, using Face ID to unlock your iPhone makes sense. It doesn’t consult a massive database of photos to determine an individual’s identity; it simply limits access to the person it has previously enrolled as the authorized user.
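That difference – one-to-one verification versus one-to-many identification – is worth making concrete. Here is a minimal sketch of the two operations; the embeddings, distance metric, and threshold are illustrative assumptions, not a description of how Face ID or any vendor’s system actually works:

```python
import numpy as np

THRESHOLD = 0.6  # assumed match threshold, purely for illustration

def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 verification (access control): compare a face embedding
    against the single template enrolled on the device."""
    return np.linalg.norm(probe - enrolled) < THRESHOLD

def identify(probe: np.ndarray, database: dict) -> str | None:
    """1:N identification (surveillance): search an entire database
    of people for the closest match to the probe embedding."""
    best = min(database, key=lambda name: np.linalg.norm(probe - database[name]))
    return best if np.linalg.norm(probe - database[best]) < THRESHOLD else None
```

The first function needs data about no one but the device’s owner; the second only works because someone has assembled a database of faces – which is precisely the line between access control and surveillance.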

Facial recognition in schools also makes sense. Campuses should be closed to anyone who isn’t authorized, and guests should be flagged upon entry. This use of facial recognition – at entry and exit points only – relies on people’s up-front consent to having their images added to a database.

However, when facial recognition is used in public thoroughfares such as airports, libraries, hospitals, and city streets it becomes a surveillance tool – one often disguised as an access control mechanism or a ‘crime prevention’ technique.

In airports, for example, facial recognition is often peddled as a means to replace boarding passes. CNN’s Francesca Street pointed out last year that some airlines were implementing facial recognition systems without customers’ knowledge.

Airports and other heavily trafficked public areas often implement systems from companies that claim their AI can stop, prevent, detect, or predict crimes.

There’s no such thing as an AI that can predict crime. Hundreds of venture capitalists and AI-startup CEOs may beg to differ, but the simple fact of the matter is that no human or machine can see into the future (exception: wacky quantum computers).

AI can sometimes detect objects with a fair degree of accuracy – some systems can determine whether a person has a cell phone or a firearm in their pocket. It can potentially prevent a crime by limiting access, such as locking doors when a firearm is detected until a human can determine whether the threat is real.

But AI systems purported to predict crime are simply surveillance systems built on prestidigitation. When law enforcement agencies claim they use crime-prediction software, what they really mean is that a computer is telling them that places where lots of people have already been arrested are great places to arrest more people. AI relies on the data it’s given to make guesses that will please its developers.
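A toy simulation shows that feedback loop at work. All the numbers here are invented; the point is only the mechanism – predictions trained on arrest records send patrols where arrests already happened, which produces more arrests there, which ‘confirms’ the prediction:

```python
import numpy as np

# Invented arrest counts for four districts; district 0 starts out
# heavily policed, not necessarily high in actual crime.
arrests = np.array([50.0, 5.0, 5.0, 5.0])

for year in range(5):
    # The "prediction" is just normalized historical arrest data...
    predicted_risk = arrests / arrests.sum()
    # ...which directs patrols, and new arrests land roughly in
    # proportion to where patrols were sent.
    arrests += 100 * predicted_risk

print(np.round(arrests / arrests.sum(), 2))
# -> [0.77 0.08 0.08 0.08]: district 0 dominates forever,
#    regardless of where crime actually occurs.
```

No matter how many years the loop runs, the model’s output never changes, because it is measuring police activity, not crime.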

When airports and other public thoroughfares employ facial recognition, those responsible for deploying it almost always claim it will save time and lives. They tell us the system can scan crowds for terrorists, people with ill intent, and criminals at large. We’re led to believe that thousands of firearms, bombs, and other threats will be thwarted if we use their technology.

But what real benefit is there? The pitch assumes that every second could be our last, that we’re in danger every time we enter a public space. We’re seemingly faced with a life-and-death choice: give up our privacy, or take our chances out among the general public.

Reason and general statistics tell us this can’t possibly be the case. In fact, you’re more likely to die of disease, a car accident, or a drug overdose than you are to be murdered by a stranger or killed by a terrorist.

It would seem that the measurable benefit – one company says it found about 5,000 threats while scanning more than 50 million people – doesn’t outweigh the potential risks. We have no way of knowing how those 5,000 threats would actually have played out, but we do know exactly what can happen when government surveillance technology is misused.
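Some back-of-the-envelope arithmetic with the company’s own figures shows how little they tell us. The false-positive rate below is an assumption made for illustration – the company reported no error rates at all:

```python
flags = 5_000          # "threats" the vendor says it found
scanned = 50_000_000   # people scanned

print(f"hit rate: {flags / scanned:.4%}")  # 0.0100% -- one flag per 10,000 scans

# Assumed, not reported: even a system that errs only 0.1% of the time
# would, at this scale, mis-flag far more people than it catches.
assumed_false_positive_rate = 0.001
print(f"expected false flags: {scanned * assumed_false_positive_rate:,.0f}")  # 50,000
```

Without published error rates, ‘5,000 threats’ is a marketing figure, not a measure of benefit.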

TNW’s CEO, Boris Veldhuijzen van Zanten, had this to say about our privacy in a post he wrote about people who think they have nothing to hide:

Before WWII, the city of Amsterdam figured it was nice to keep records of as much information as possible. They figured: the more you know about your citizens, the better you’ll be able to help them, and the citizens agreed. Then the Nazis came in looking for Jewish people, gay people, and anyone they didn’t like, and said ‘Hey, that’s convenient, you have records on everything!’ They used these records to very efficiently pick up and kill a lot of people.

Today, the idea of the government tracking us all through facial recognition software may not seem all that scary. If we’re good people, we have nothing to worry about – or so the thinking goes. But what if bad actors or the government doesn’t think we’re good people? What if we’re LGBTQIA+ in a state or country where the government is allowed to discriminate against us?

What if our government, police, or political rivals create databases of known gays, Muslims, Jews, Christians, Republicans who support the Second Amendment, doctors willing to perform abortions, or ‘Antifa’ and ‘alt-right’ activists, and use AI to identify, discriminate against, and track the people they deem their enemies? History tells us these things aren’t just possible; so far, they’ve been inevitable.

We’re careening past the time for regulation and towards the point of certain regret.
