Reliable high-level fusion of several input modalities is hard to achieve, and (semi-)automatically generating it is even more difficult. It is nonetheless important to address in order to broaden the scope of providing user interfaces semi-automatically. Our approach starts from a high-level discourse model created by a human interaction designer. Since this model is modality-independent, an annotated discourse is semi-automatically generated from it, which influences the fusion mechanism. Our high-level fusion checks hypotheses from the various input modalities using finite state machines. These are modality-independent, and they are automatically generated from the given discourse model. Taken together, our approach provides semi-automatic generation of high-level fusion. It currently supports the input modalities of a graphical user interface, (simple) speech, a few hand gestures, and a bar code reader.
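To illustrate the flavor of such a mechanism, the following is a minimal sketch of a finite state machine that checks input hypotheses from several modalities. All state names, event names, and the transition table here are hypothetical examples, not taken from the paper; in the approach described above, the machines would be generated automatically from the discourse model rather than written by hand.

```python
# Illustrative sketch only: a tiny FSM that accepts or rejects
# hypotheses of the form (modality, event). States, events, and
# transitions are made-up examples, not the paper's actual model.

class FusionFSM:
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # maps (state, (modality, event)) -> next state
        self.state = start
        self.accepting = accepting

    def feed(self, modality, event):
        """Advance on a hypothesis; return False if it is rejected in this state."""
        key = (self.state, (modality, event))
        if key in self.transitions:
            self.state = self.transitions[key]
            return True
        return False

    def accepted(self):
        return self.state in self.accepting

# Hypothetical "select item" interaction combining speech and a pointing gesture.
fsm = FusionFSM(
    transitions={
        ("idle", ("speech", "select")): "awaiting_target",
        ("awaiting_target", ("gesture", "point")): "done",
        ("awaiting_target", ("speech", "item_name")): "done",
    },
    start="idle",
    accepting={"done"},
)

fsm.feed("speech", "select")    # speech hypothesis advances the machine
fsm.feed("gesture", "point")    # gesture hypothesis completes the interaction
print(fsm.accepted())           # True
```

Because the transition table is plain data, a generator could emit one such table per discourse fragment, keeping the machine itself modality-independent.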