
Modeling multimodal human-computer interaction



2 Author(s)
Z. Obrenovic and D. Starcevic, University of Belgrade, Serbia

Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: we speak, move, gesture, and shift our gaze in an effective flow of communication. Recent initiatives such as perceptual and attentive user interfaces put these natural human behaviors at the center of human-computer interaction (HCI). We've designed a generic modeling framework for specifying multimodal HCI using the Object Management Group's Unified Modeling Language. Because UML is a well-known and widely supported standard (computer science departments typically cover it in undergraduate courses, and many books, training courses, and tools support it), it makes it easier for software engineers unfamiliar with multimodal research to apply HCI knowledge, resulting in broader and more practical effects. Standardization provides a significant driving force for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools.
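The abstract's core idea, specifying multimodal interaction with standard object-oriented models, can be sketched in plain code. The classes below are a minimal, hypothetical illustration of that modeling style (modalities as typed events fused into one interaction stream); the names and structure are assumptions for exposition, not the authors' actual UML metamodel.

```python
from dataclasses import dataclass, field
from typing import List, Set

# Hypothetical sketch: class and field names are illustrative,
# not taken from the paper's UML framework.

@dataclass
class ModalityEvent:
    """One input event from a single modality (e.g., speech, gesture, gaze)."""
    modality: str   # which input channel produced the event
    payload: str    # recognized content, e.g. an utterance or gesture label
    timestamp: float

@dataclass
class InteractionSession:
    """Fuses events from several modalities into one interaction stream."""
    events: List[ModalityEvent] = field(default_factory=list)

    def record(self, event: ModalityEvent) -> None:
        self.events.append(event)

    def modalities_used(self) -> Set[str]:
        return {e.modality for e in self.events}

# Example: a user speaks a command while pointing at an icon.
session = InteractionSession()
session.record(ModalityEvent("speech", "open file", 0.0))
session.record(ModalityEvent("gesture", "point-at-icon", 0.2))
print(sorted(session.modalities_used()))  # ['gesture', 'speech']
```

In UML terms, this corresponds to a simple class diagram: an `InteractionSession` class with a one-to-many association to `ModalityEvent`, which is the kind of structure a UML-literate software engineer could read and extend without prior exposure to multimodal-HCI research.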

Published in:

Computer (Volume: 37, Issue: 9)