From face features analysis to automatic lip reading

Authors: P. Delmas (Dept. of Comput. Sci., Auckland Univ., New Zealand); M. Lievin

An unsupervised framework for face analysis aimed at lip tracking is presented in this paper. A colour video sequence of a speaker's face is acquired with a desktop camera under natural lighting conditions and without any particular make-up. After a logarithmic colour transform, a statistical segmentation process regularizes motion and hue information within a spatio-temporal neighbourhood. A hierarchical segmentation then labels the different areas of the face. The results are used to define a region of interest for each facial feature, particularly the lip contours. Lip corners and associated characteristic points are extracted to initialise an active-contours stage. Finally, the speaker's lip shape, with inner and outer borders, is tracked without user tuning. This unsupervised framework provides geometrical features of the face without assuming any specific model of the speaker's face.
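The abstract does not give the exact form of the logarithmic colour transform or the motion cue, so the sketch below is only a plausible instantiation, not the authors' method: a log-ratio of the red and green channels (which cancels multiplicative lighting changes, one common motivation for log colour spaces) and a frame-difference motion map. The function names and the `eps`/`threshold` parameters are hypothetical.

```python
import numpy as np

def log_colour_hue(frame, eps=1.0):
    """Hypothetical logarithmic colour transform: a log-ratio of the
    red and green channels. Under a lighting change r -> k*r, g -> k*g,
    the log-ratio log(k*r) - log(k*g) is unchanged, which is one reason
    log colour transforms are used for lighting-robust hue cues."""
    r = frame[..., 0].astype(np.float64)
    g = frame[..., 1].astype(np.float64)
    return np.log(r + eps) - np.log(g + eps)

def motion_map(prev_frame, frame, threshold=0.1):
    """Crude motion cue: threshold the frame-to-frame change of the
    hue map. A statistical spatio-temporal regularization, as in the
    paper, would smooth this map rather than hard-threshold it."""
    diff = np.abs(log_colour_hue(frame) - log_colour_hue(prev_frame))
    return diff > threshold
```

Lips typically have a higher red/green ratio than the surrounding skin, so thresholding such a hue map is one way a segmentation stage could propose a candidate lip region before the active-contours refinement.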

Published in:

7th International Conference on Control, Automation, Robotics and Vision (ICARCV 2002), Volume 3

Date of Conference:

2-5 Dec. 2002