Presentation Attack Detection (PAD) – Liveness Detection of Faces
Presentation Attack Detection (PAD) is the task of determining whether a facial recognition attempt is made by a genuine person or by an “artefact” that tries to fool the system. It is better known as “face liveness detection” or simply “liveness detection”, although, scientifically speaking, that is not entirely correct: the goal is not to detect whether the person is actually alive, but to find out whether someone is trying to spoof the system by pretending to be someone else.
This is a new and challenging problem for face detection. Now that faces can be detected in images quite reliably, we need to find out whether a detected face is “real” or an impostor trying to deceive the system.
There are several levels of fake attempts that can occur within a face recognition process. Such a false presentation is made with a PAI, a Presentation Attack Instrument. Presentation attack instruments can be grouped as follows:
- A planar photo
- A bent photo
- A video
- A human-controlled avatar replayed as a video
- Avatars projected onto artificial heads
- Physical masks
Many algorithms have already been developed to tackle this scenario. An incomplete list is:
- Using a 3D camera will at least reject planar photo attacks
- Stereometry using at least 2 images can accomplish the same
- Intrinsic movements of the face can be detected (such as eye blinking or smiling)
- Head movements can be used for challenge response mechanisms
- Surface characteristics (of paper, screens, etc.) can be investigated
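The intrinsic-movement idea above is often implemented with the eye aspect ratio (EAR): the ratio of the vertical to horizontal eye-landmark distances drops sharply during a blink. Below is a minimal sketch of that computation; it assumes six eye landmarks are already available from a face-landmark detector (e.g. the common 68-point scheme), and the coordinates and threshold used here are illustrative values, not taken from any particular paper.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks,
    ordered p1..p6 around the eye contour as in the 68-point scheme."""
    # vertical distances between upper and lower eyelid landmarks
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # horizontal distance between the eye corners
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

# Hypothetical landmark coordinates: an open eye vs. an almost closed eye
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], dtype=float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], dtype=float)

EAR_THRESHOLD = 0.2  # illustrative blink threshold

print(eye_aspect_ratio(open_eye) > EAR_THRESHOLD)    # open eye: True
print(eye_aspect_ratio(closed_eye) < EAR_THRESHOLD)  # blink detected: True
```

In a real liveness check, the EAR would be tracked over a sequence of video frames, and the presence of natural blinks (EAR dipping below the threshold for a few frames) would count as evidence of a live subject rather than a planar photo.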
Current commercial applications use a 3D camera to solve the problem, but this approach has major drawbacks. Adding a 3D camera to a smartphone adds considerable cost to the device. And even then, it is not a complete anti-spoofing solution: a 3D camera cannot reject physical masks or makeup, for example…
This is a very active field of development right now. Even the National Institute of Standards and Technology (NIST) is addressing the situation and defines test scenarios for it in line with the ISO/IEC 30107 standard on Presentation Attack Detection.
Other useful presentations on this topic come from Germany; a good starting point is Christoph Busch's lecture about PAD, and you can simply google the author to find more.
Deepfakes and their meaning to PAD
Deepfakes are fake images or videos created with deep learning methods, i.e. deep neural networks. They are very hard to detect, and the technology is evolving rapidly. Nevertheless, a deepfake attack is not really a new challenge for Presentation Attack Detection: the deepfake still needs to be presented, through a mobile device or some other display, and at that point it is no more dangerous than a “normal” presentation attack.
Deepfake detection methods
- A desktop application must detect and disallow the use of virtual cameras. An easy way is to inspect the camera’s name and check it against a blacklist. Note, however, that it is not very hard to change that camera name in the system manually.
- If you are unsure whether this was a fake image or not, refer to the Reverse Image Searching section on this website.
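The camera-name blacklist mentioned above can be sketched as follows. The device names in the list are illustrative examples of common virtual-camera drivers, not an authoritative set, and, as noted, an attacker can rename the device, so this check is only a first hurdle.

```python
# Illustrative blacklist of virtual-camera driver names (lowercase).
VIRTUAL_CAMERA_BLACKLIST = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "xsplit vcam",
}

def is_virtual_camera(camera_name: str) -> bool:
    """Return True if the reported camera name matches a known virtual camera.

    This relies entirely on the self-reported device name, which can be
    changed by the user, so it must not be the only line of defense.
    """
    name = camera_name.strip().lower()
    return any(entry in name for entry in VIRTUAL_CAMERA_BLACKLIST)

print(is_virtual_camera("OBS Virtual Camera"))  # True: known virtual camera
print(is_virtual_camera("Integrated Webcam"))   # False: looks like real hardware
```

How the application enumerates camera devices and reads their names is platform-specific (e.g. DirectShow on Windows, AVFoundation on macOS); the function above only shows the matching step.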
Software for Face Liveness Detection
Since this is a fairly new challenge in face recognition, there are not many vendors that provide anti-spoofing software for integration into existing apps. Here is a list of SDKs / web services that I am aware of (please contact me if you know more!):
Face Liveness Detection Vendors
| Vendor | Offering |
|---|---|
| BioID | BioID offers a (partially free) web service for liveness detection based on motion and texture features. |
| Meerkat | Meerkat offers fake detection based on intrinsic face movements. |
| NEUROtechnology | NEUROtechnology offers active and passive fake detection. |
| ZoOm | ZoOm uses a combination of 3D and texture analysis. |