

Facial recognition tech sucks, but it’s inevitable


Image by: US CBP

Is facial recognition accurate? Can it be hacked? These are just some of the questions being raised by lawmakers, civil libertarians, and privacy advocates in the wake of an ACLU report released last summer that claimed Amazon’s facial recognition software, Rekognition, misidentified 28 members of Congress as criminals.

Rekognition is a general-purpose application programming interface (API) that developers can use to build applications that detect and analyze scenes, objects, faces, and other items within images. The source of the controversy was a pilot program in which Amazon teamed up with the police departments of two cities, Orlando, Florida, and Washington County, Oregon, to explore the use of facial recognition in law enforcement.

In January 2019, the Daily Mail reported that the FBI had been testing Rekognition since early 2018. The Project on Government Oversight also revealed, via a Freedom of Information Act request, that Amazon had pitched Rekognition to ICE in June 2018.

Amazon defended its API by noting that Rekognition’s default confidence threshold of 80 percent, while great for social media tagging, “wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.” For law enforcement applications, Amazon recommends a confidence threshold of 99 percent or higher.

But the report’s larger concerns, that facial recognition could be misused, that it may be less accurate for minorities, and that it poses a threat to the human right to privacy, are still up for debate. And if there’s one thing that’s certain, it’s that this won’t be the last time a high-profile tech company advancing a new technology sparks an ethical debate.

So who’s in the right? Are the concerns raised by the ACLU justified? Is it all sensationalist media hype? Or could the truth, like most things in life, be wrapped in a layer of nuance that requires more than a surface-level understanding of the underlying technology?

To get to the bottom of this issue, let’s take a deep dive into the world of facial recognition, its accuracy, its vulnerability to hacking, and its impact on the right to privacy.

How accurate is facial recognition?

Before we can assess the accuracy of that ACLU report, it helps to cover some background on how facial recognition systems work. The accuracy of a facial recognition system depends on two things: the design of the neural network and the training data set.

  • The neural network needs enough layers and compute resources to process a raw image from facial detection through landmark recognition, normalization, and finally facial recognition (the stages are sketched in the example after this list). There are also various algorithms and techniques that can be employed at each stage to improve a system’s accuracy.
  • The training data must be large and diverse enough to accommodate potential variations, such as ethnicity or lighting.
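
To make those stages concrete, here’s a minimal Python sketch of the pipeline. Every function is a hypothetical stand-in for a trained model, so treat it as an illustration of the data flow rather than a working recognizer.

```python
import hashlib
import numpy as np

# Hypothetical stand-ins for each stage; a real system would use trained
# models (a face detector, a landmark localizer, a deep embedding network).

def detect_face(image):
    """Stage 1: locate and crop the face (stubbed as a center crop)."""
    h, w = image.shape[:2]
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def normalize(face):
    """Stages 2-3: align via landmarks and standardize lighting (stubbed
    as simple mean/variance normalization)."""
    return (face - face.mean()) / (face.std() + 1e-8)

def embed(face):
    """Stage 4: map the face to a vector that can be compared. Stubbed
    with a deterministic pseudo-random projection keyed on the pixels."""
    seed = int(hashlib.sha256(face.tobytes()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(128)

def recognize(image, enrolled):
    """Match a probe image against enrolled embeddings by cosine similarity."""
    vec = embed(normalize(detect_face(image)))
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(enrolled, key=lambda name: cosine(vec, enrolled[name]))
```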

Moreover, there is something called a confidence threshold that you can use to control the number of false positives and false negatives in your results. A higher confidence threshold leads to fewer false positives and more false negatives; a lower one leads to more false positives and fewer false negatives.
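
A toy example with synthetic match scores (not real Rekognition output) makes that trade-off visible:

```python
import numpy as np

# Synthetic match scores: genuine pairs tend to score high, impostor
# pairs lower, but the two distributions overlap. The threshold decides
# which kind of error you accept.
rng = np.random.default_rng(0)
genuine = rng.normal(0.95, 0.04, 10_000)   # same person, two photos
impostor = rng.normal(0.70, 0.10, 10_000)  # different people

for threshold in (0.80, 0.99):
    fp = np.mean(impostor >= threshold)  # wrong person flagged as a match
    fn = np.mean(genuine < threshold)    # real match missed
    print(f"threshold {threshold:.2f}: false positives {fp:.1%}, "
          f"false negatives {fn:.1%}")
```

Raising the threshold from 0.80 to 0.99 slashes the false positive rate while the false negative rate balloons, which is exactly the trade Amazon says law enforcement should make.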

Revisiting the accuracy of the ACLU’s take on Amazon Rekognition

With this information in mind, let’s return to that ACLU report and see if we can’t bring clarity to the debate.

In the US and many other countries, you’re innocent until proven guilty, so Amazon’s response highlighting improper use of the confidence threshold checks out. Using a lower confidence threshold, as the ACLU report did, increases the number of false positives, which is dangerous in a law enforcement setting. It’s possible the ACLU simply never adjusted the API’s default setting to match the intended application.
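
For reference, the threshold is an explicit parameter on Rekognition’s CompareFaces call. A minimal boto3 sketch; the image file names are placeholders, and AWS credentials are assumed to come from your environment:

```python
import boto3

# Sketch of a CompareFaces call with the 99 percent threshold Amazon
# recommends for law enforcement. "probe.jpg" and "reference.jpg" are
# placeholder file names.
client = boto3.client("rekognition")

with open("probe.jpg", "rb") as probe, open("reference.jpg", "rb") as ref:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": ref.read()},
        SimilarityThreshold=99.0,  # don't leave this at the default
    )

# Only face pairs scoring at or above the threshold come back as matches.
for match in response["FaceMatches"]:
    print(f"match with {match['Similarity']:.2f}% similarity")
```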

That said, the ACLU also noted: “the false matches were disproportionately of people of color…Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.” Amazon’s comment about the confidence threshold does not directly address the bias this reveals in its system.

Facial recognition’s accuracy problems with regard to minorities are well known to the machine learning community. Google famously had to apologize when its image-recognition app labeled African Americans as “gorillas” in 2015.

In early 2018, a study conducted by Joy Buolamwini, a researcher at the MIT Media Lab, tested facial recognition products from Microsoft, IBM, and China’s Megvii. The error rate for darker-skinned women was 21 percent for Microsoft, while IBM’s and Megvii’s were closer to 35 percent. The error rates for all three products were closer to 1 percent for light-skinned males.

In the study, Buolamwini points out that a data set used to give one major US technology company an accuracy rate of more than 97 percent was more than 77 percent male and more than 83 percent white.
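
The study’s key methodological move, reporting error rates per subgroup rather than a single aggregate accuracy, is simple to apply to any system. A sketch with hypothetical evaluation results:

```python
from collections import Counter

# Disaggregated evaluation in the spirit of the MIT study: report the
# error rate per subgroup rather than one aggregate accuracy number.
# `predictions` is hypothetical example data: (subgroup, was_correct).
predictions = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned female", True),
    ("darker-skinned male", True), ("darker-skinned male", False),
    ("darker-skinned female", False), ("darker-skinned female", True),
]

totals, errors = Counter(), Counter()
for group, correct in predictions:
    totals[group] += 1
    errors[group] += not correct

for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%} (n={n})")
```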

This highlights a problem where widely available benchmark data for facial recognition algorithms simply aren’t diverse enough. As Microsoft senior researcher Hanna Wallach stated in a blog post highlighting the company’s recent efforts to improve accuracy across all skin colors:

If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.

The key takeaway? The unconscious bias of the (nearly exclusively white and male) designers of facial recognition systems puts minorities at risk of being misprofiled by law enforcement.

Focusing on the quality and size of data used to train neural networks could improve the accuracy of facial recognition software. Simply training algorithms with more diverse datasets could alleviate some of the fears of misprofiling minorities.
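
One concrete, if simplistic, version of that fix is to rebalance the training set so every group is equally represented before training. A sketch, assuming each sample already carries a group label (collecting those labels is its own hard problem):

```python
import random
from collections import defaultdict

def balance_by_group(dataset, seed=0):
    """Downsample so every demographic group contributes equally.
    `dataset` is a hypothetical list of (image_path, group_label) pairs."""
    by_group = defaultdict(list)
    for sample in dataset:
        by_group[sample[1]].append(sample)
    size = min(len(samples) for samples in by_group.values())
    rng = random.Random(seed)
    balanced = []
    for samples in by_group.values():
        balanced.extend(rng.sample(samples, size))  # downsample each group
    rng.shuffle(balanced)
    return balanced
```

Downsampling throws data away, so in practice teams more often oversample or collect additional examples of underrepresented groups, but the balancing principle is the same.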

Can facial recognition be hacked?

Yes, facial recognition can be hacked; the better question is how. As a type of image recognition software, facial recognition shares many of the same vulnerabilities. Image recognition neural networks don’t “see” the way we do.

You can trick a self-driving car into speeding past a stop sign by covering the sign with special stickers, or add a layer of noise invisible to humans to a photo of a school bus to convince image recognition tech that it’s an ostrich.

You can even impersonate an actor or actress with special eyeglass frames to bypass a facial recognition security check. And let’s not forget the time security firm Bkav hacked the iPhone X’s Face ID with “a composite mask of 3-D-printed plastic, silicone, makeup, and simple paper cutouts.”

To be fair, tricking facial recognition software requires extensive knowledge about the underlying neural network and the face you wish to impersonate. That said, researchers at the University of North Carolina recently showed that there’s nothing stopping hackers from pulling public pictures and building 3D facial models.

These are all examples of what security researchers are calling ‘adversarial machine learning’.
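
One classic recipe behind such attacks is the fast gradient sign method (FGSM): take the gradient of the model’s loss with respect to the input pixels, then nudge every pixel a tiny step in the direction that increases the loss. A minimal PyTorch sketch, using an untrained stand-in classifier purely for illustration:

```python
import torch
import torch.nn as nn

# Stand-in classifier; a real attack would target a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "photo"
label = torch.tensor([3])                             # its true class

# Gradient of the loss with respect to the input pixels.
loss_fn(model(image), label).backward()

# FGSM step: move each pixel slightly in the direction that raises the loss.
epsilon = 0.01  # small enough to be imperceptible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# Against a trained model, a perturbation this small can flip the label.
print("before:", model(image).argmax().item(),
      "after:", model(adversarial).argmax().item())
```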

As AI begins to permeate our daily lives, it’s important for cybersecurity professionals to get into the heads of tomorrow’s hackers and look for ways to exploit neural networks so that they can develop countermeasures.

Facial recognition and data privacy

In the wake of the coverage of Facebook’s three largest data breaches last year, in which some 147 million accounts are believed to have been exposed, you’d be forgiven for missing details on yet another breach of privacy: Russian firms scraped together enough data from Facebook to build their own mirror of the Russian portion of the social network.

It’s believed that the data was harvested by SocialDataHub to support sister firm Fubutech, which is building a facial recognition system for the Russian government. Still reeling from the Cambridge Analytica scandal, Facebook has found itself an unwitting asset in a nation state’s surveillance efforts.

Facebook stands at the center of a much larger debate between technological advancement and data privacy. Advocates for advancement argue that facial recognition promises better, more personalized solutions in industries such as security, entertainment, and advertising. The airline Qantas hopes to one day incorporate emotional-analytics technology into its facial recognition system to better cater to the needs of both passengers and flight staff.

But privacy advocates are concerned with the ever-present danger of the Orwellian surveillance state. Modern China is starting to look like a Black Mirror episode. Beijing achieved 100 percent video surveillance coverage in 2015, facial recognition is being used to fine jaywalkers instantly via text, and a new social credit system is already ranking some citizens on their behavior. Privacy advocates worry this new surveillance state will turn political and be used to punish critics and protesters.

More broadly, we as a society have to decide how we use facial recognition and other data-driven technologies, and how that usage stacks up against Article 12 of the Universal Declaration of Human Rights:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.

With great technology comes great responsibility

I’ve covered a lot of the issues surrounding facial recognition technology, but it’s important to remember what we as a society stand to gain. In many ways facial recognition is the next logical step in the advancement of:

  • Social media, which has led to a greater sense of community, shared experience, and improved channels for communication
  • Advertising, where facial recognition can take personalization, customer engagement, and conversion to the next level
  • Security, where biometrics offer a unique package of both enhanced security and convenience for the end user
  • Customer service, where facial recognition can be paired with emotional analytics to provide superior customer experience
  • Smart cities, where ethical use of surveillance, emotional analytics, and facial recognition can create safer cities that respect an individual’s right to privacy
  • Robotics, where a Star Trek-esque future with robot assistants and friendly androids will only ever take place if neural networks master the ability to recognize faces

Great technology comes with great responsibility. It’s in the interest of both privacy advocates and developers to improve data sets and algorithms and to guard against tampering. Resolving the conflicts between the human right to privacy and the gains in convenience, security, and safety is a worthwhile endeavor. And at the end of the day, how we choose to use facial recognition is what really matters.
