Face recognition performance depends on the input variability encountered during biometric data capture, including occlusion and disguise. The challenge addressed in this paper is to expand the scope and utility of biometrics by discarding unwarranted assumptions about the completeness and quality of the captured data. Toward that end we propose a model-free and non-parametric component-based face recognition strategy with robust data-fusion decisions driven by transduction and boosting. The conceptual framework draws support throughout from discriminative methods using likelihood ratios. It links forensics and biometrics at the conceptual level, and the Bayesian framework and statistical learning theory (SLT) at the implementation level. Feature selection over local patch instances and their high-order combinations, exemplar-based clustering of patches into components (with exemplars shared among components), and authentication decisions made by boosting, with components serving as weak learners, are all implemented in a uniform fashion using transduction driven by a strangeness measure akin to typicality. The feasibility, reliability, and utility of the proposed open set face recognition architecture under adverse image capture conditions are illustrated using FRGC data. The paper concludes with directions for future development.
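As a minimal illustration of the kind of strangeness measure the abstract refers to (a sketch, not the paper's actual implementation), transduction-based methods commonly score a sample by the ratio of its summed distances to the k nearest same-class neighbors over its k nearest other-class neighbors; small values indicate a typical sample, large values an atypical one. The function name, k, and the Euclidean metric below are illustrative assumptions:

```python
import numpy as np

def strangeness(sample, same_class, other_class, k=3):
    """k-NN strangeness (illustrative sketch): ratio of summed distances
    to the k nearest same-class examples over the k nearest other-class
    examples. Values below 1 suggest typicality; above 1, strangeness."""
    d_same = np.sort(np.linalg.norm(same_class - sample, axis=1))[:k]
    d_other = np.sort(np.linalg.norm(other_class - sample, axis=1))[:k]
    return d_same.sum() / d_other.sum()

# Illustrative usage with two well-separated synthetic clusters.
same = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
other = np.array([[5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1]])
s_typical = strangeness(np.array([0.05, 0.05]), same, other)   # near own class: < 1
s_strange = strangeness(np.array([5.05, 5.05]), same, other)   # near other class: > 1
```

In a boosting setting such as the one the abstract describes, scores like these can be ranked to weight components acting as weak learners, with stranger (less typical) matches contributing less to the authentication decision.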