
Automatic extraction of semantic information for a context sensitive multimodal framework for VR

Authors: G. Conti (Graphitech, Villazzano, Italy); G. Ucelli; R. De Amicis

The capability of processing spoken commands is one of the most important features of modern multimodal AR/VR environments. This feature requires programmers to encode human-supplied knowledge in the form of grammars, which are used at runtime to parse spoken utterances into complete commands. Furthermore, speech recognition (SR) support must be hard-coded into the application. This time-consuming, error-prone process is repeated every time the code is modified. This paper presents a completely automatic process for building a body of knowledge from the information embedded within the application's source code. Throughout the coding process, the programmer in fact embeds a vast amount of semantic information by defining classes and reference names, or through method definitions. This research work exploits this semantic richness to provide a self-configurable system, which automatically adapts its understanding of human commands according to the semantic information within the application's source code.
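The core idea of mining identifiers from source code to derive a command grammar can be illustrated with a minimal Python sketch. The class and method names below are hypothetical stand-ins for a real VR application's code, and the phrase-to-method mapping is a deliberately naive grammar, not the authors' actual implementation:

```python
import inspect
import re

# Hypothetical VR application class; in the paper the semantic
# information is mined from the real application's source code.
class SceneEditor:
    def create_cube(self, size):
        pass
    def rotate_object(self, name, angle):
        pass
    def delete_object(self, name):
        pass

def identifier_to_words(identifier):
    """Split 'create_cube' or 'createCube' into lowercase words."""
    parts = re.split(r'_|(?<=[a-z])(?=[A-Z])', identifier)
    return [p.lower() for p in parts if p]

def build_grammar(cls):
    """Derive a spoken-phrase -> method-name mapping from a class's
    public methods, so the SR grammar tracks the code automatically."""
    grammar = {}
    for name, _ in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith('_'):
            continue
        grammar[' '.join(identifier_to_words(name))] = name
    return grammar

grammar = build_grammar(SceneEditor)
# e.g. the spoken phrase "create cube" now maps to the method create_cube
```

Because the grammar is regenerated from the code itself, renaming or adding a method updates the recognizable commands without any hand-maintained grammar file, which is the self-configuration property the abstract describes.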

Published in:

2005 International Conference on Cyberworlds (CW'05)

Date of Conference:

23-25 Nov. 2005