Vishwam Sankaran is a former Editorial Fellow with The Next Web, currently based in Bangalore, India.
Researchers from New York University have created a set of master fingerprint keys that can be used to spoof biometric identification systems.
While the database of fingerprints used by the researchers had a chance of falsely matching with a random fingerprint one out of 1000 times, the master prints they generated had the power to falsely match one out of five times.
Their paper, published on the pre-print server arXiv, demonstrates that fingerprints can be artificially generated using machine learning and used to trick systems secured by fingerprint authentication.
This is alarming because a growing number of devices, and large-scale databases like India’s Aadhaar, use digital fingerprinting to uniquely identify users – and could potentially be targeted with such ‘master key’ fingerprints by identity thieves.
A report published last year by Counterpoint Research indicated that more than 50 percent of smartphones shipped in 2017 had fingerprint sensors in them, and predicted that the figure would rise to 71 percent by the end of this year.
The problem is that these sensors capture only partial images of users’ fingerprints – just the areas that make contact with the scanner. The paper noted that since partial prints are not as distinctive as complete prints, the chances of one partial print being matched with another are high.
The artificially generated prints, dubbed DeepMasterPrints by the researchers, exploit this vulnerability to falsely match one in five fingerprints in a database whose false-match rate was supposed to be only one in a thousand.
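A back-of-the-envelope calculation (not from the paper) shows why that jump matters: if each master print independently matches a given enrolled fingerprint with probability p, an attacker trying a small dictionary of k prints succeeds with probability 1 − (1 − p)^k. The function and figures below are illustrative assumptions, not the researchers’ own code.

```python
def attack_success_probability(p: float, k: int) -> float:
    """Chance that at least one of k independent master prints
    falsely matches a given enrolled fingerprint, each with rate p."""
    return 1 - (1 - p) ** k

# Ordinary random prints at the sensor's 1-in-1,000 false-match rate:
print(attack_success_probability(0.001, 5))  # roughly 0.005

# DeepMasterPrints at the reported 1-in-5 rate per print:
print(attack_success_probability(0.2, 5))    # roughly 0.67
```

With just five attempts – a limit many phones actually enforce before locking out – the master prints turn a negligible risk into a two-in-three chance of a false match under these simplifying assumptions.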
Another vulnerability exploited by the researchers was that some natural fingerprint features, such as loops and whorls, occur far more often than others. With this understanding, the team generated prints containing several of these common features, and found that such artificial prints were more likely to match other prints than chance alone would allow.
Using these most common features, the neural networks also generated fake prints that look convincingly like real fingerprints.
The DeepMasterPrints can be used to spoof a system requiring fingerprint authentication without actually requiring any information about the user’s fingerprints. As the paper noted about the application of the fake prints:
Therefore, they can be used to launch a dictionary attack against a specific subject that can compromise the security of a fingerprint-based recognition system.
Mikko Hypponen, a cyber security expert and columnist, took to Twitter to articulate the significance of this vulnerability in commonly used biometric systems.
Interesting research on creating synthetic fingerprints that can match a large number of real fingerprints. These would be Master Prints, just like we have Master Keys for locks. #GAN https://t.co/YzNjfHzZpB pic.twitter.com/2n39On45pP
— Mikko Hypponen (@mikko) November 13, 2018
There has always been a cat-and-mouse chase between cyber security measures and tools that exploit their vulnerabilities, so a shakeup like this was inevitable. But it will be interesting to see whether this method of exploiting common biometric features can be used to spoof other types of systems, such as iris scanners.
Another thing to look out for is the security of public databases that rely solely on biometric scanners. Your friendly neighborhood burglar is unlikely to craft master prints to get into your phone. But large-scale databases, such as those used by governments to ID citizens, could potentially be spoofed far more easily by ambitious criminals – Aadhaar, we are looking at you.