Multimodality is a fact of human communication. Our ways of communicating are changing, and humans increasingly interact with machines, be it a mundane ATM transaction, a call to an automated call center, or the setting and disarming of a residential alarm system. However, these interactions are mostly limited to single input and output schemes, thereby losing much of the additional information a human communication partner would sense; multimodality was conceived to tackle exactly this point. This paper describes a framework and approach for operating multimodal interaction mechanisms in both fixed and mobile environments. It presents a scheme that facilitates the dynamic binding and release of user-interface devices (such as screens, keyboards, etc.) to support multimodal interactions in mobile environments and to enable the user to make use of any user-interface device that is available (and allowed), thus supporting the individual's changing communication environment. The principles and basic functionality of an adaptive multimodal human interface-device binding engine are outlined.
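The dynamic binding and release of user-interface devices described above can be illustrated with a minimal sketch. The class and method names (`BindingEngine`, `announce`, `withdraw`, `bind`) are hypothetical and not taken from the paper; the sketch only assumes that devices appear and disappear as the user's environment changes, and that a session binds any available, permitted device per modality.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    name: str
    modality: str   # e.g. "visual", "audio", "haptic"
    allowed: bool = True   # whether policy permits binding this device

class BindingEngine:
    """Hypothetical sketch of an adaptive device-binding engine."""

    def __init__(self) -> None:
        self._available: dict[str, Device] = {}   # name -> device
        self._bound: dict[str, Device] = {}       # modality -> bound device

    def announce(self, device: Device) -> None:
        # A device becomes reachable in the user's current environment.
        self._available[device.name] = device

    def withdraw(self, name: str) -> None:
        # A device leaves the environment; release its binding if it held one.
        device = self._available.pop(name, None)
        if device and self._bound.get(device.modality) is device:
            del self._bound[device.modality]

    def bind(self, modality: str) -> Optional[Device]:
        # Bind the first available, allowed device for the requested modality.
        for device in self._available.values():
            if device.modality == modality and device.allowed:
                self._bound[modality] = device
                return device
        return None

# Usage: a wall screen appears, is bound for visual output, then leaves.
engine = BindingEngine()
engine.announce(Device("wall-screen", "visual"))
engine.announce(Device("headset", "audio"))
print(engine.bind("visual").name)   # wall-screen
engine.withdraw("wall-screen")
print(engine.bind("visual"))        # None
```

A real engine would also need device discovery, user preferences, and security policy to decide what "allowed" means; the point here is only the bind/release lifecycle driven by a changing device environment.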