Most work on multi-biometric fusion relies on static fusion rules, which cannot adapt to changes in the environment or in individual users. This paper proposes adaptive multi-biometric fusion, which dynamically adjusts the fusion rule to match real-time external conditions. As a typical example, the adaptive fusion of gait and face in video is studied. Two factors that may affect the relationship between gait and face in the fusion are considered: the view angle and the subject-to-camera distance. Together they determine how gait and face are fused at any given time. Experimental results show that the adaptive fusion performs significantly better than not only the single biometric traits but also the widely adopted static fusion rules SUM, PRODUCT, MIN, and MAX.
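To make the contrast concrete, the following is a minimal sketch of score-level fusion: the four static rules named above, plus a hypothetical adaptive rule whose weights depend on view angle and subject-to-camera distance. The weighting heuristic and all constants are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: static score-level fusion rules vs. a
# hypothetical condition-dependent (adaptive) fusion rule.

def static_fusion(gait_score, face_score, rule="sum"):
    """Combine two normalized match scores in [0, 1] with a fixed rule."""
    rules = {
        "sum": lambda g, f: (g + f) / 2.0,  # mean of the two scores
        "product": lambda g, f: g * f,
        "min": min,
        "max": max,
    }
    return rules[rule](gait_score, face_score)

def adaptive_fusion(gait_score, face_score, view_angle_deg, distance_m):
    """Weighted-sum fusion whose weights track capture conditions.

    Assumed heuristic (not from the paper): gait is weighted more near a
    side view (90 degrees) and at larger distances, where face detail is
    poor; face is weighted more near a frontal view at close range.
    """
    # Gait weight grows toward side view and with distance (capped at 10 m).
    w_gait = 0.5 * (view_angle_deg / 90.0) + 0.5 * min(distance_m / 10.0, 1.0)
    w_gait = min(max(w_gait, 0.0), 1.0)
    w_face = 1.0 - w_gait
    return w_gait * gait_score + w_face * face_score
```

For example, at a frontal close-range view the adaptive rule returns essentially the face score, while at a distant side view it returns essentially the gait score; the static rules produce the same output regardless of conditions.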