Integrated person identification using voice and facial features

Authors:

Chibelushi, C.C. (Dept. of Electr. & Electron. Eng., Wales Univ., Swansea, UK); Mason, J.S.D.; Deravi, F.

Abstract:

Real-world automatic person recognition requires consistently high recognition accuracy, which is difficult to attain using a single recognition modality. This paper addresses the issue of person identification accuracy resulting from the combination of voice and outer lip-margin features. An assessment of feature fusion, based on audio-visual feature vector concatenation, principal component analysis, and linear discriminant analysis, is conducted. The paper shows that outer lip margins carry speaker identity cues. It is also shown that the joint use of voice and lip-margin features is equivalent to an effective increase in the signal-to-noise ratio of the audio signal. Simple audio-visual feature vector concatenation is shown to be an effective method for feature combination, and linear discriminant analysis is shown to pack discriminating audio-visual information into fewer coefficients than principal component analysis.
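
The fusion approach summarised above (concatenating synchronised voice and outer lip-margin feature vectors, then projecting the fused vectors with principal component analysis or linear discriminant analysis) can be illustrated with a minimal sketch. The sketch below is not the authors' implementation: it assumes NumPy and scikit-learn are available, and it substitutes randomly generated placeholder features and speaker labels for the paper's acoustic and lip-margin measurements.

    # Illustrative sketch of feature-level audio-visual fusion with PCA and LDA.
    # Placeholder data only; real use would supply per-frame acoustic features
    # and outer lip-margin features extracted from synchronised audio and video.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_frames, n_speakers = 600, 10
    audio_feats = rng.normal(size=(n_frames, 12))   # hypothetical audio feature vectors
    lip_feats = rng.normal(size=(n_frames, 6))      # hypothetical lip-margin feature vectors
    speaker_ids = rng.integers(0, n_speakers, size=n_frames)

    # Feature fusion by simple concatenation of the two modalities.
    fused = np.hstack([audio_feats, lip_feats])     # shape: (n_frames, 18)

    # Unsupervised projection: PCA retains directions of maximum variance.
    fused_pca = PCA(n_components=8).fit_transform(fused)

    # Supervised projection: LDA is limited to n_speakers - 1 components,
    # so it packs class-discriminating information into fewer coefficients.
    lda = LinearDiscriminantAnalysis(n_components=min(8, n_speakers - 1))
    fused_lda = lda.fit_transform(fused, speaker_ids)

    print(fused_pca.shape, fused_lda.shape)         # (600, 8) (600, 8)

Either projection would then feed a speaker classifier; the paper's comparison concerns how compactly each projection represents the identity cues carried by the fused vectors.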

Published in:

IEE Colloquium on Image Processing for Security Applications (Digest No.: 1997/074)

Date of Conference:

10 Mar 1997