This is part of a series of blogs diving into the technical aspects of Veridium’s distributed data model, biometrics, and computer vision research and development by our chief biometric scientist Asem Othman.
A biometric system is a pattern recognition system and, like password- or token-based authentication systems, it has vulnerable points and faces threats. In 2001, IBM researchers identified and categorized these types of attacks in their paper “Enhancing security and privacy in biometrics-based authentication systems.”
These attacks are intended either to circumvent the security afforded by the system or to disrupt its normal functioning:
- Presentation Attack – Spoofing the biometric trait, such as with a finger mold, presented at the sensor.
- Replay Old Data – Resubmitting illegally intercepted data to the system.
- Override Feature Extractor – Overriding the feature extractor to produce predetermined feature sets.
- Synthesized Feature Vector – Replacing legitimate feature sets with synthetic feature sets.
- Override Matcher – Overriding the matcher to output high scores, thereby defying the system security.
- Modify Template – Compromising the templates stored in the database. Alternately, introducing new templates to the database.
- Intercept the Channel – Altering the data in the communication channel between various modules of the system.
- Override Final Decision – Overriding the final decision output by the biometric system.
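To see why attack points 5 and 8 are so damaging, it helps to look at the decision step they target. The sketch below is a toy illustration (all function names and the distance-based score are hypothetical, not Veridium's actual matcher): the matcher reduces a comparison to a similarity score, and the final decision is a simple threshold check. An attacker who can override either the score or the comparison defeats the system regardless of how strong the biometric itself is.

```python
# Toy sketch of a biometric matcher's decision step (hypothetical API).
# Attack point 5 replaces the feature vector, attack point 8 overrides
# the score; both subvert this final threshold comparison.

def match_score(probe_features: list[float], template: list[float]) -> float:
    """Toy similarity measure: 1 / (1 + Euclidean distance)."""
    dist = sum((p - t) ** 2 for p, t in zip(probe_features, template)) ** 0.5
    return 1.0 / (1.0 + dist)

def decide(probe_features: list[float], template: list[float],
           threshold: float = 0.8) -> bool:
    # The entire security decision collapses to one comparison.
    return match_score(probe_features, template) >= threshold

template = [0.2, 0.5, 0.9]
print(decide([0.21, 0.49, 0.88], template))  # genuine-like probe -> True
print(decide([0.9, 0.1, 0.3], template))     # impostor-like probe -> False
```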
Several security techniques exist to thwart attacks at various points (2–5, 7, and 8), including encrypting communication channels, using mutual authentication, placing the feature extractor and the matcher in secure locations, and limiting unsuccessful attempts.
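One of the listed countermeasures, limiting unsuccessful attempts, can be sketched in a few lines. This is an illustrative outline only (the class and method names are invented for this example), showing the core bookkeeping: count consecutive failures per user and lock out further attempts once a limit is reached.

```python
# Minimal sketch of attempt limiting (illustrative, not a real API).

class AttemptLimiter:
    def __init__(self, max_attempts: int = 3):
        self.max_attempts = max_attempts
        self.failures: dict[str, int] = {}  # user id -> consecutive failures

    def record_failure(self, user_id: str) -> None:
        self.failures[user_id] = self.failures.get(user_id, 0) + 1

    def record_success(self, user_id: str) -> None:
        self.failures.pop(user_id, None)  # reset the counter on success

    def is_locked(self, user_id: str) -> bool:
        return self.failures.get(user_id, 0) >= self.max_attempts

limiter = AttemptLimiter(max_attempts=3)
for _ in range(3):
    limiter.record_failure("alice")
print(limiter.is_locked("alice"))  # True: three failures hit the limit
print(limiter.is_locked("bob"))    # False: no failures recorded
```

A production system would also add cooldown timers and audit logging, but the per-user failure counter is the essential mechanism.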
Other attack points (1 and 6) require specific focus in biometrics. In our previous blog, we discussed the biometric template protection scheme you can use to avoid attacks on point 6. Here, we will discuss adversary attacks on point 1, known as presentation attacks. This vulnerability opens the possibility that a determined impostor could masquerade as an enrolled user by presenting a physical artifact of a legitimately enrolled user. Alternatively, an individual may deliberately manipulate his or her biometric trait in order to avoid detection by an automated biometric system.
Presentation attacks are a vulnerability in most biometric systems and are commonly known as “spoofing.” In 2016, the first ISO standard on biometric presentation attack detection, ISO/IEC 30107-1:2016, was published. It defines a presentation attack as the “presentation of an artifact or human characteristic to the biometric capture subsystem in a fashion that could interfere with the intended policy of the biometric system.” An example of a presentation attack is the use of an artificial biometric or artifact, such as a mask, to spoof a biometric device.
For example, the iPhone received attention when its fingerprint reader, located in the home button of the phone, was spoofed. While the threat is clearly there, there has been little evidence that widespread fraud based on spoofing has occurred in biometric systems.
Efforts to Spoof
It is quite worrisome that hackers may gain access to an individual’s biometric data, as with the database of fingerprint images stolen in the 2015 US Office of Personnel Management hack; like other stolen private data, its uses can be quite damaging to an individual. However, there is a misconception that a stolen fingerprint is the equivalent of a stolen password. A password can be input quite simply by typing its characters on any keyboard. A biometric, in principle, must be entered through a biometric capture device.
Biometric images are not entered directly. Instead, the stolen images would need to be converted into a spoof artifact that works with the specific image capture module. In other words, while attacks using a stolen fingerprint database are possible, they are not as simple to mount as entering a stolen password.
Therefore, it is important to analyze the specific application that uses biometric recognition when deciding whether this type of presentation attack poses significant risk, and to determine mitigations that are balanced in terms of the consequence/effort tradeoff.
The “effort” you need to account for is the combination of time, knowledge, and resources it takes to perform a presentation attack on a biometric system. The level of effort depends on the targeted biometric trait and is tied to the potential consequences of attacking the authentication system.
Spoofing Different Biometric Traits
Biometric systems that use readily observable traits, such as faces or voice patterns, are more prone to attack than systems based on other biometrics, such as fingerprints. Lifting an impression of a fingerprint from a surface requires more effort and skill than Googling an image of a person’s face. Further, the cost of launching a face spoof attack, using a printed photo, displayed photo, or replayed video, is relatively low compared to manufacturing spoof fingers using molds or putty. There is also, of course, the risk of having an evil twin, though this would be a zero-effort attack.
The Consequences and Cooperative Attacks
Reports of spoof attacks in active identity management systems have included a Brazilian doctor who, in 2013, used fake fingers made of silicone to sign in absent colleagues.
In this attack, although fingerprints are much harder to spoof than faces, cooperation between the genuine users and the attacker lowered the level of effort needed to fool the system. Moreover, the consequence of the attack did not harm the genuine users; on the contrary, it benefited them by marking them as present while they were not there in person.
On the other hand, if the consequence of an attack will hurt the genuine user, such as an attack on their bank account, users will not cooperate and will do their best to protect the secrecy of their biometric. The consequences of an attack and the possibility of cooperation between genuine users and attackers have a direct impact on the level of effort needed to attack a biometric trait.
Understanding the level of effort needed to spoof a biometric system and the consequences of an attack is necessary to select the best methods for mitigating presentation attacks.
Mitigation of Presentation Attacks
Mitigation can be accomplished using a combination of approaches: multiple factors (a biometric and a password), multiple biometrics, a limited number of authentication attempts, biometric collection under the supervision of trained operators, and challenge-response schemes in which users are asked to say a specific phrase, make a facial expression, or move their hand in a specific way. Most of these are intrusive approaches that may affect the usability and convenience of biometric-based authentication systems, especially mobile biometric systems.
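The challenge-response idea mentioned above can be sketched as follows. This is a deliberately simplified illustration (the challenge list and function names are invented): the system picks a random prompt at authentication time, so a pre-recorded replay is unlikely to match the requested action.

```python
# Toy challenge-response sketch (illustrative only). A real system would
# verify the response with computer vision or speech analysis; here we
# just compare labels to show the control flow.
import random

CHALLENGES = ["blink twice", "turn head left", "say today's date"]

def issue_challenge() -> str:
    # Random selection is the point: a static replay can't anticipate it.
    return random.choice(CHALLENGES)

def verify(challenge: str, observed_action: str) -> bool:
    return challenge == observed_action

c = issue_challenge()
print(verify(c, c))               # True: user performed the requested action
print(verify(c, "static photo"))  # False: a static or replayed presentation fails
```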
Hence, ISO standards define and highlight the need for presentation attack detection (PAD) methods that detect such attacks automatically. Note that the ISO standards have been significant in developing a common terminology, because PAD methods have gone by many names, such as liveness detection, artifact detection, spoof detection, or anti-spoofing, among others. The goal of PAD methods is to determine whether the biometric data presented during authentication comes from the authorized person who is present at the time of capture. PAD methods are non-intrusive techniques that can be categorized into hardware-based and software-based methods.
Hardware-based methods use special sensors to measure signs of life, such as multispectral fingerprint sensors or the iPhone X’s infrared and 3D sensors. Software-based methods, on the other hand, analyze the captured biometric image or signal for evidence of spoofing.
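As a rough illustration of the software-based idea, the sketch below computes one simple texture statistic over a grayscale image and flags low-detail captures, since printed or replayed photos often lose fine texture. This is not a real PAD detector (production methods use much richer features and trained classifiers), and the threshold here is an invented placeholder.

```python
# Illustrative software-based PAD check (toy example, not production PAD):
# low pixel variance is used as a crude stand-in for "missing texture".

def local_variance(image: list[list[float]]) -> float:
    """Variance of all pixel intensities in a grayscale image."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def is_live(image: list[list[float]], min_texture: float = 0.01) -> bool:
    # In practice this threshold would be tuned on genuine vs. spoof data.
    return local_variance(image) >= min_texture

flat_spoof = [[0.5, 0.5], [0.5, 0.5]]  # uniform patch: no texture at all
textured = [[0.1, 0.9], [0.8, 0.2]]    # strong local variation
print(is_live(flat_spoof))  # False
print(is_live(textured))    # True
```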
So What’s Next?
The biometric industry needs to assess the types of attacks in order to develop solutions for them and to test robustness against them. Presentation attacks can be separated into levels based on an increasing difficulty to mount, in terms of time, skill, and equipment. The consequences of an attack and the possibility of cooperation between genuine users and attackers are also main factors that must be considered when developing mitigation methods.
Biometric academic and standards communities need to develop a robust testing framework and performance metrics that the biometrics industry and application architects can use. Recently, NIST has been leading these efforts, along with the ISO standards communities.