A new AI system aims to make biometric authentication more secure by analyzing facial movements.
The tech requires users to record a short video of themselves performing a unique facial motion. A neural network framework then analyzes the footage, learning their facial features and their movements concurrently.
When the user later attempts to access their phone, the system checks that both their face and their facial motion match the recorded data.
The tech, called Concurrent Two-Factor Identity Verification (C2FIV), was developed by Brigham Young University professor D.J. Lee. He said it could provide a safer verification method than current biometric identifiers:
The biggest problem we are trying to solve is to make sure the identity verification process is intentional. If someone is unconscious, you can still use their finger to unlock a phone and get access to their device or you can scan their retina. You see this a lot in the movies — think of Ethan Hunt in Mission Impossible even using masks to replicate someone else’s face.
In a preliminary study, Lee trained the neural network on 8,000 clips of 50 subjects making various facial movements, such as smiling, blinking, and raising their eyebrows. He said the system verified their identities with over 90% accuracy, which could improve further with a larger training dataset.
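Lee hasn't published the architecture in detail, but the general idea of verifying appearance and motion concurrently can be sketched. The following is a toy illustration only, not C2FIV itself: every function name is hypothetical, and simple pixel statistics stand in for the learned encoder. It embeds a clip's appearance (its frames) and its motion (frame-to-frame differences) into one vector, then accepts an unlock attempt only if that vector is close to the enrolled one.

```python
import numpy as np

def embed(frames: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned video encoder (hypothetical).

    Combines an appearance summary (per-pixel mean over frames) with a
    motion summary (mean absolute frame-to-frame difference) into a
    single unit-length vector.
    """
    appearance = frames.mean(axis=0).ravel()                       # static facial features
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=0).ravel()  # movement pattern
    # Center each part so similarity reflects structure, not overall brightness
    v = np.concatenate([appearance - appearance.mean(),
                        motion - motion.mean()])
    return v / (np.linalg.norm(v) + 1e-9)

def verify(enrolled: np.ndarray, attempt: np.ndarray,
           threshold: float = 0.9) -> bool:
    """Accept only if the attempt's embedding is close (cosine) to the enrolled one."""
    return bool(embed(enrolled) @ embed(attempt) >= threshold)

# Tiny synthetic demo: 10 frames of 8x8 "video"
rng = np.random.default_rng(0)
enrolled = rng.random((10, 8, 8))
same = enrolled + rng.normal(0, 0.01, enrolled.shape)  # same face + motion, noisy re-capture
different = rng.random((10, 8, 8))                     # unrelated clip
```

In a real system the encoder would be a trained neural network and the threshold tuned on validation data; the structure, though, is the same: one embedding that only matches when both the face and the secret motion agree.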
The tech isn’t likely to replace the face unlock systems used in modern smartphones, which are getting much harder to fool. But Lee believes C2FIV could be used in a wide range of applications, from online banking to vehicle access:
We could build this very tiny device with a camera on it and this device could be deployed easily at so many different locations. How great would it be to know that even if you lost your car key, no one can steal your vehicle because they don’t know your secret facial action?
HT – Engadget.