Facial recognition systems seem to be showing up everywhere. While the technology is not used only for security (at the Soccer Hall of Fame, for example, it will personalize the visitor experience), security is its primary purpose. It has been deployed at large venues such as airports around the world, where it has already caught an imposter in the U.S., and more deployments are coming, including at the 2020 Olympics in Tokyo. It is also appearing on smartphones, replacing fingerprint scanners, and it will soon be added to Chrome OS for use on Chromebooks.
Because these systems are ubiquitous and used for security, it is important to understand the weaknesses of facial recognition.
Automatic face recognition remains a challenging problem in image analysis and computer vision. The first big problem is that facial recognition isn’t done with a close-up scan, as fingerprints or iris scans are. It’s done at varying distances and in a whole range of environments: indoors and out, night and day, and under lighting conditions almost certainly different from those of the photo being used for comparison.
Furthermore, things like facial pose (or camera viewpoint), facial expression, and occlusions (sunglasses or other coverings — including makeup), all add to the difficulty of getting a correct match. In unconstrained scenarios, where face image acquisition is not well controlled, or where subjects may be uncooperative, the factors affecting appearance will confound the performance of face recognition.
Then there’s the fact that while other biometrics, like fingerprints and iris scans, don’t change over a person’s life, faces do. Whether the change comes from age, facial hair, illness, or weight gain, it makes accurate facial recognition more difficult.
Moreover, there may be similarities between the face images of different people, especially if they are genetically related. Such similarities further compound the difficulty of recognizing people based on their faces.
So far, commercial facial recognition systems have accuracy problems that amount to racial and gender bias. A 2018 study of gender classification systems found an error rate of 34.7% for darker-skinned females versus 0.8% for lighter-skinned males. As the researchers note: “The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.”
To be clear, facial recognition systems are improving, and researchers will hopefully develop solutions to these issues. Until they do, though, facial recognition will continue to have significant problems.