
Semi-Automatically Generated High-Level Fusion for Multimodal User Interfaces

4 Author(s)
Ertl, D.; Kavaldjian, S.; Kaindl, H.; Falb, J. (Inst. of Comput. Technol., Vienna Univ. of Technol., Vienna, Austria)

Reliable high-level fusion of several input modalities is hard to achieve, and (semi-)automatically generating it is even more difficult. Addressing it is nonetheless important in order to broaden the scope of providing user interfaces semi-automatically. Our approach starts from a high-level discourse model created by a human interaction designer. Since this model is modality-independent, an annotated discourse is semi-automatically generated from it, which influences the fusion mechanism. Our high-level fusion checks hypotheses from the various input modalities by use of finite state machines. These are modality-independent and are automatically generated from the given discourse model. Taken together, our approach provides semi-automatic generation of high-level fusion. It currently supports the input modalities graphical user interface, (simple) speech, a few hand gestures, and a bar code reader.
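The core mechanism described in the abstract can be illustrated with a minimal sketch: a finite state machine whose transitions encode the allowed steps of a discourse, and which checks incoming input hypotheses regardless of which modality produced them. All names and the toy discourse below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed names, not the paper's code): a modality-independent
# finite state machine that checks input hypotheses against allowed transitions.
class FusionFSM:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # maps (state, intent) -> next state
        self.state = start
        self.accepting = accepting

    def feed(self, hypothesis):
        """A hypothesis is a (modality, intent) pair. The FSM inspects only the
        intent, so it stays independent of the modality that produced it."""
        modality, intent = hypothesis
        key = (self.state, intent)
        if key not in self.transitions:
            return False  # reject the hypothesis; state is unchanged
        self.state = self.transitions[key]
        return True  # hypothesis accepted, state advanced

    def accepted(self):
        return self.state in self.accepting

# Toy discourse: select a product, then confirm, via any supported modality.
fsm = FusionFSM(
    transitions={("start", "select_product"): "selected",
                 ("selected", "confirm"): "done"},
    start="start",
    accepting={"done"},
)
fsm.feed(("barcode", "select_product"))  # hypothesis from the bar-code reader
fsm.feed(("speech", "confirm"))          # hypothesis from speech input
print(fsm.accepted())                    # -> True
```

In the paper's approach such machines are not hand-written as above but generated automatically from the discourse model; the sketch only shows how modality-independent hypothesis checking can work.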

Published in:

2010 43rd Hawaii International Conference on System Sciences (HICSS)

Date of Conference:

5-8 Jan. 2010