
An analysis of speakers' gaze behavior for automatic addressee identification in multiparty conversation and its application to video editing



Authors:
Takemae, Y.; Otsuka, K.; Mukawa, N. (NTT Communication Science Laboratories, NTT Corp., Kanagawa, Japan)

This work addresses the problem of identifying speaker-addressee links in face-to-face multiparty conversation. Systems that archive meetings and systems that support teleconferences are attracting considerable interest. Conventional systems use a fixed-viewpoint camera and simple camera selection based on cues such as the participants' utterances. Unfortunately, they fail to adequately convey who is talking to whom. To solve this problem, we must automatically detect the addressee or addressees and develop video editing rules that clearly convey who is talking to whom. In this paper, to detect the addressee, we statistically analyze speakers' gaze behavior for (a) one-addressee utterances and (b) multi-addressee utterances. Experiments verify that speakers' gaze behavior classifies addressee type with 89% accuracy, using the discriminant function obtained by discriminant analysis. Finally, we present three new video editing rules based on utterance type, and indicate the possibility of more successfully conveying who is talking to whom.
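The classification step described above can be sketched with a two-class Fisher linear discriminant. The gaze features below are hypothetical illustrations (not the paper's actual feature set): the fraction of an utterance spent gazing at the most-gazed listener, and the number of distinct listeners gazed at. The data is synthetic; this is a minimal sketch of the technique, not the authors' implementation.

```python
import numpy as np

# Hypothetical gaze features per utterance (illustrative, not from the paper):
#   [fraction of utterance gazing at the most-gazed listener,
#    number of distinct listeners gazed at]
# Class 0 = one-addressee utterance, class 1 = multi-addressee utterance.
rng = np.random.default_rng(0)
X0 = rng.normal([0.8, 1.2], [0.1, 0.4], size=(50, 2))  # focused gaze
X1 = rng.normal([0.4, 2.5], [0.1, 0.4], size=(50, 2))  # distributed gaze
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Fisher's linear discriminant: w = Sw^{-1} (mu1 - mu0),
# where Sw is the within-class scatter matrix.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2  # midpoint between projected class means

# Classify: projections above the threshold are labeled multi-addressee.
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated synthetic data like this, the discriminant recovers the class labels almost perfectly; the paper's 89% figure applies to real gaze measurements, which are far noisier.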

Published in:

13th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2004)

Date of Conference:

20-22 Sept. 2004