This paper addresses the selection of the best lip motion features for biometric open-set speaker identification. The best features are those that result in the highest discrimination of individual speakers in a population. We first detect the face region in each video frame. The lip region for each frame is then segmented following the registration of successive face regions by global motion compensation. The initial lip feature vector is composed of the 2D-DCT coefficients of the optical flow vectors within the lip region at each frame. We propose to select the most discriminative features from the full set of transform coefficients by using a probabilistic measure that maximizes the ratio of intra-class and inter-class probabilities. The resulting discriminative feature vector of reduced dimension is expected to maximize the identification performance. Experimental results are included to demonstrate the identification performance of the proposed feature selection.
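The feature-selection step can be illustrated with a minimal sketch. The snippet below scores each transform coefficient by a Fisher-style ratio of inter-class to intra-class variance and keeps the top-scoring dimensions; this is a simplified stand-in for the paper's probabilistic intra-class/inter-class measure, and the function names and the use of variance ratios (rather than class-conditional probabilities) are assumptions for illustration.

```python
import numpy as np

def discriminability_scores(features, labels):
    """Score each feature dimension by the ratio of inter-class to
    intra-class variance (a simplified stand-in for the paper's
    probabilistic intra/inter-class measure)."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    overall_mean = features.mean(axis=0)
    inter = np.zeros(features.shape[1])  # between-class scatter per dimension
    intra = np.zeros(features.shape[1])  # within-class scatter per dimension
    for c in np.unique(labels):
        cls = features[labels == c]
        class_mean = cls.mean(axis=0)
        inter += len(cls) * (class_mean - overall_mean) ** 2
        intra += ((cls - class_mean) ** 2).sum(axis=0)
    # Guard against zero within-class scatter
    return inter / np.maximum(intra, 1e-12)

def select_top_features(features, labels, k):
    """Return the indices of the k most discriminative coefficients,
    ordered from highest to lowest score."""
    scores = discriminability_scores(features, labels)
    return np.argsort(scores)[::-1][:k]
```

In the full pipeline, `features` would hold the 2D-DCT coefficients of the lip-region optical flow for each frame, and the selected coefficient indices would define the reduced-dimension feature vector used by the identification back end.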