Designing a robust speech and gaze multimodal system for diverse users

Authors:

Qiaohui Zhang; K. Go; A. Imamiya; Xiaoyang Mao (Dept. of Computer & Media Engineering, Yamanashi University, Kofu, Japan)

Abstract:

Recognition errors make recognition-based systems brittle and lead to usability problems. Multimodal systems are widely regarded as an effective means of error avoidance and recovery. This work explores how to combine gaze and speech, two error-prone modes, into a robust multimodal architecture. Combining the two overcomes the imperfections of the individual recognition techniques, compensates for the drawbacks of a single mode, resolves language ambiguity, and leads to a much more effective system. In addition, we employ a new performance criterion for error-handling ability to analyze and assess multimodal integration strategies. With this new measure, we not only demonstrate the benefits of mutual disambiguation of the individual input signals within the multimodal architecture, but also identify the conditions under which the multimodal system is most effective.
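The abstract credits the robustness to mutual disambiguation of the individual input signals. As a rough illustration of that idea (not the authors' actual integration strategy, which the abstract does not describe), the Python sketch below fuses hypothetical N-best lists from a speech recognizer and a gaze tracker; the interpretations, scores, compatibility table, and product-of-scores fusion rule are all assumptions made for the example.

    from itertools import product

    # Hypothetical N-best lists; names and scores are invented for illustration.
    # Each recognizer is assumed to emit (interpretation, normalized score) pairs.
    speech_nbest = [("delete tile", 0.45), ("delete file", 0.40), ("select file", 0.15)]
    gaze_nbest = [("file_icon", 0.70), ("tile_widget", 0.30)]

    # Hypothetical semantic model: which spoken commands a gazed-at object can
    # plausibly complete. A real system would derive this from its grammar or
    # application state rather than from a hard-coded table.
    compatible = {
        ("delete tile", "tile_widget"),
        ("delete file", "file_icon"),
        ("select file", "file_icon"),
    }

    def fuse(speech, gaze):
        """Rank joint (speech, gaze) hypotheses, dropping incompatible pairs.

        Mutual disambiguation: a lower-ranked hypothesis in one mode can win
        overall when the other mode rules out its higher-scored competitors.
        """
        joint = [
            (s, g, s_score * g_score)  # naive product-of-scores fusion
            for (s, s_score), (g, g_score) in product(speech, gaze)
            if (s, g) in compatible
        ]
        return max(joint, key=lambda h: h[2], default=None)

    # The top speech hypothesis ("delete tile") loses: gaze evidence favors the
    # file icon, so the second-ranked "delete file" wins the joint ranking.
    print(fuse(speech_nbest, gaze_nbest))  # ('delete file', 'file_icon', ~0.28)

The point of the toy example is the error-correction behavior the abstract attributes to combining the two modes: the speech recognizer's misrecognition is overturned because the gaze signal makes its intended alternative the most consistent joint interpretation.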

Published in:

2003 IEEE International Conference on Information Reuse and Integration (IRI 2003)

Date of Conference:

27-29 Oct. 2003