
Spoof Trace Disentanglement for Generic Face Anti-Spoofing


Abstract:

Prior studies show that the key to face anti-spoofing lies in subtle image patterns, termed "spoof traces," e.g., color distortion, 3D mask edges, and Moiré patterns. Spoof detection rooted in these spoof traces can improve not only the model's generalization but also its interpretability. Yet, it is a challenging task due to the diversity of spoof attacks and the lack of ground truth for spoof traces. In this work, we propose a novel adversarial learning framework to explicitly estimate the spoof-related patterns for face anti-spoofing. Inspired by the physical process, spoof faces are disentangled into spoof traces and their live counterparts in two steps: an additive step and an inpainting step. This two-step modeling effectively narrows down the search space for adversarial learning of spoof traces. Based on this trace modeling, the disentangled spoof traces can be used to reversely construct new spoof faces, which serve as data augmentation to effectively tackle long-tail spoof types. In addition, we apply frequency-based image decomposition to both the input and the disentangled traces to better reflect low-level vision cues. Our approach demonstrates superior spoof detection performance in three testing scenarios: known attacks, unknown attacks, and open-set attacks. Meanwhile, it provides a visually convincing estimation of the spoof traces. Source code and pre-trained models will be publicly available upon publication.
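
The abstract's two-step trace model lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the idea: an additive step subtracts an estimated trace, an inpainting step fills regions the additive model cannot explain, and a simple Gaussian low-pass split stands in for the frequency-based decomposition. The networks `trace_net` and `inpaint_net`, their output conventions, and the blur-based band split are all assumptions made for illustration, not the authors' released architecture.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x, kernel_size=9, sigma=3.0):
    """Low-pass filter used here as a stand-in frequency-band split."""
    coords = torch.arange(kernel_size, dtype=x.dtype) - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).to(x.device)
    # Depthwise kernel: one (k x k) Gaussian per input channel.
    kernel = (g[:, None] * g[None, :]).expand(
        x.shape[1], 1, kernel_size, kernel_size).contiguous()
    return F.conv2d(x, kernel, padding=kernel_size // 2, groups=x.shape[1])

def frequency_decompose(x):
    """Split an image into low- and high-frequency components."""
    low = gaussian_blur(x)
    high = x - low
    return low, high

def disentangle(spoof, trace_net, inpaint_net):
    """Two-step disentanglement: additive step, then inpainting step.

    Step 1 subtracts an estimated additive trace (color distortion,
    Moiré pattern, ...). Step 2 inpaints regions the additive model
    cannot explain (e.g., pixels occluded by a 3D-mask edge).
    `trace_net` and `inpaint_net` are hypothetical modules; the
    inpainting net is assumed to return a soft mask in [0, 1] plus
    replacement content.
    """
    additive_trace = trace_net(spoof)           # estimated spoof trace
    coarse_live = spoof - additive_trace        # additive step
    mask, content = inpaint_net(coarse_live)    # inpainting step
    live = (1 - mask) * coarse_live + mask * content
    return live, additive_trace, mask

def synthesize_spoof(live, additive_trace):
    """Reverse direction: apply a disentangled trace to a new live face,
    producing a synthetic spoof sample for data augmentation."""
    return live + additive_trace
```

Reversing the additive step (`synthesize_spoof`) is what enables the data-augmentation use of disentangled traces mentioned in the abstract: traces harvested from rare spoof types can be pasted onto plentiful live faces to rebalance the training distribution.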
Page(s): 3813 - 3830
Date of Publication: 20 May 2022

PubMed ID: 35594228

1 Introduction

In recent years, the vulnerability of face biometric systems has been widely recognized and has drawn increasing attention from the computer vision community. Such attacks attempt to deceive these systems into making incorrect identity decisions: either recognizing an attacker as a target person (i.e., impersonation) or concealing the attacker's original identity (i.e., obfuscation).
