I. Introduction
Biometric recognition is now in widespread use across forensic, civilian, and consumer domains. Approaches to the testing and performance reporting of biometric systems [1], [2] under the normal presentation mode (normal or routine use implies that the system is used in the fashion intended by the system designer [3]; spoofed trials are considered to be outside of the normal presentation mode)
are well established — see e.g. the ISO/IEC 19795 standard [4]. Despite their high reliability, biometric systems are unfortunately not infallible outside of the normal presentation mode, e.g. when they are attacked by an adversary. Since the identification of biometric system attack points more than two decades ago [5], the community has been active in addressing vulnerabilities, especially presentation or spoofing attacks, across all the major biometric modes, including fingerprints [6], face [7], and voice [8], [9]. Well-studied examples of spoofing attacks include printed photographs (face), gummy fingers (fingerprints), and audio replay (voice). Of particular concern are vulnerabilities to DeepFakes [10], stemming from rapid developments in deep learning [11], which can be used to implement face swapping and voice cloning.