This article was published on August 8, 2016

New facial recognition algorithm is so smart it doesn’t need to see your face

Facial recognition already poses serious problems for privacy advocates. Used by everyone from law enforcement to churches, the technology raises very real privacy concerns, and they’re about to get a lot worse.

The ability to identify anyone just by analyzing an image of their face creates a severe imbalance of power between ordinary citizens and the people in charge. The ability to identify people whose faces are blurred or otherwise obstructed destroys that balance entirely. Yet that’s exactly what algorithms like the ‘Faceless Recognition System’ (FRS) are designed to do.

FRS was created by researchers at the Max Planck Institute in Saarbrücken, Germany, as a method of identifying individuals in imperfect — blurry or otherwise obscured — images. The system trains a neural network on a set of photos containing both obscured and unobscured images of the same people, then uses that training to spot similarities in a target’s head and body across photos.
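For the curious, here’s roughly what that kind of setup looks like in code. This is a minimal PyTorch sketch of the general idea — learning embeddings so that obscured and unobscured crops of the same person land close together — and not the researchers’ actual implementation; the network architecture, loss function, and input sizes below are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BodyEmbedder(nn.Module):
    """Tiny CNN that maps a head-and-body crop to a 128-d embedding.
    (Hypothetical stand-in; the real system is far larger.)"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 128)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)  # unit-length embeddings

def contrastive_loss(a, b, same_person, margin=0.5):
    """Pull matching pairs together, push non-matching pairs apart."""
    d = (a - b).pow(2).sum(1).sqrt()
    return torch.where(same_person.bool(),
                       d.pow(2),
                       F.relu(margin - d).pow(2)).mean()

model = BodyEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: unobscured crops, obscured crops, and match labels.
unobscured = torch.randn(8, 3, 128, 64)
obscured = torch.randn(8, 3, 128, 64)
labels = torch.randint(0, 2, (8,))

loss = contrastive_loss(model(unobscured), model(obscured), labels)
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained this way, identifying someone with a hidden face reduces to comparing the embedding of the obscured crop against embeddings of known, unobscured photos.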


It’s crazy accurate, too. After seeing an unobscured version of a face just once, the algorithm can identify an obscured version of the same face with 69.6 percent accuracy. Give it 10 images of the person’s face and the accuracy climbs to 91.5 percent.

There are, however, limitations. Black boxes obscuring a person’s face, for example, drop the accuracy to about 14.7 percent, though even that is three times better than human performance.

It’s not just one algorithm, either. Facebook has its own facial recognition algorithms that can reportedly identify users with obscured faces at an 83 percent accuracy rate, using cues such as stance and body type. The Faceless Recognition System, however, may be the first fully trainable system that uses a full range of body cues to identify its targets.

The researchers recognize the privacy concerns:

From a privacy perspective, the results presented here should raise concern. It is very probable that undisclosed systems similar to the ones described here already operate online. We believe it is the responsibility of the computer vision community to quantify, and disseminate the privacy implications of the images users share online.

In theory, the statement is a good one: the community should police its own creations.

In practice, however, it’s only a matter of time — if it hasn’t happened already — before these algorithms end up in the hands of governments, law enforcement agencies and militaries around the world. At that point, we’ll all be living in a non-hyperbolic version of 1984.

via Motherboard
