The global debate on the use of facial recognition by governments and law enforcement just got a lot more intense. Over the weekend, the New York Times’ Kashmir Hill published an eye-opening piece detailing a relatively unknown firm offering facial recognition services to roughly 600 US law enforcement agencies, with an image library more than seven times larger than that of the FBI.
It’s a Peter Thiel-funded company called Clearview AI, and its service matches faces from images you upload with those in its database of some three billion photos. These pictures have been scraped from ‘millions’ of websites, including Facebook, YouTube, and Venmo. In addition to having a massive database, Clearview AI also boasts the ability to match faces even when you upload imperfect pictures, e.g. ones taken at odd angles or from a height, like from a surveillance camera.
The tool is said to match faces correctly about 75 percent of the time, and it’s already helped nab criminals. What’s worrying is that it wasn’t tested for accuracy by any independent party before being made available to police forces.
This sounds like yet another blow to our notion of privacy, and it doesn’t seem like there’s an easy way to rein in such tools. As Stanford Law School privacy professor Al Gidari noted in the piece, there will be many more such companies. “Absent a very strong federal privacy law, we’re all screwed,” he added. And that’s just the US.
The entire piece is worth a read, as Hill details the interesting origin story of Clearview AI, and also describes how the service is being used and perceived by law enforcement agencies in the US. Find it here on NYT.