The paper focuses on “personally identifiable information” (PII) and interoperability, with emphasis on quantitative performance analysis and validation under uncontrolled operational settings, varying image quality, and variable demographics and gallery composition. Biometric face authentication involves three distinct operational stages: face space learning (“training”), gallery enrollment, and testing (“querying”). The authentication tasks considered here are face identification and verification. Performance evaluation involves k-fold cross-validation using both PCA and LDA for face space representation. Our basic findings indicate that (a) training to learn the face space is less important than the quality of images during enrollment and testing; (b) excluding the first eigenvectors when defining the face space improves performance, particularly for the PCA face space and lesser-quality data; (c) the size of the subject gallery affects performance; and (d) it makes little difference whether the face space is derived from biometric data drawn from the same dataset source as that used for enrollment and testing. Possible solutions to enhance overall performance and cope with adversarial (impostor) behavior during mass screening include non-inductive learning settings, e.g., transduction and transfer learning, using both labeled and unlabeled examples.
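To make finding (b) concrete, the following is a minimal sketch of PCA face-space construction with leading-eigenvector exclusion: the first principal components of face images often capture illumination and other nuisance variation rather than identity, so dropping them can help. The function names, the toy data, and the specific dimensions are illustrative assumptions, not the paper's actual experimental pipeline.

```python
import numpy as np

def pca_face_space(X, n_components=10, skip_leading=2):
    """Build a PCA ('eigenface') basis from flattened face images.

    X            : (n_samples, n_pixels) matrix, one face per row
    n_components : number of eigenvectors retained for the face space
    skip_leading : number of leading eigenvectors to exclude, since
                   they often encode illumination rather than identity
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are eigenvectors of the
    # sample covariance matrix, ordered by decreasing eigenvalue.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Skip the leading eigenvectors, keep the next n_components.
    basis = Vt[skip_leading:skip_leading + n_components]
    return mean, basis

def project(x, mean, basis):
    """Map one face into the reduced face space."""
    return basis @ (x - mean)

# Toy usage with random stand-ins for enrollment images.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 64))                      # 50 faces, 64 "pixels"
mean, basis = pca_face_space(X, n_components=10, skip_leading=2)
q = project(X[0], mean, basis)                     # query representation
print(basis.shape, q.shape)
```

Enrollment and querying then operate on these low-dimensional projections (e.g., nearest-neighbor matching for identification, or a distance threshold for verification), so the choice of which eigenvectors to keep directly shapes downstream accuracy.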