The fusion of information from heterogeneous sensors is crucial to the effectiveness of a multimodal system. Noise affects the sensors of different modalities independently, so a good fusion scheme should use local estimates of each modality's reliability to weight its decisions. This paper presents an iterative-decoding-based information fusion scheme motivated by the theory of turbo codes. The fusion framework is developed in the context of hidden Markov models. We present the mathematical framework of the fusion scheme, then apply the algorithm to an audio-visual speech recognition task on the GRID audio-visual speech corpus and present the results.
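To make the reliability-weighting idea concrete, here is a minimal sketch (not the paper's turbo-style iterative decoder) of decision-level fusion: each modality supplies per-class log-likelihoods from its own HMM, and a local reliability weight scales its contribution before the fused decision. All names and values are illustrative assumptions.

```python
import numpy as np

def fuse_decisions(loglik_audio, loglik_video, rel_audio, rel_video):
    """Return the class index maximizing the reliability-weighted
    sum of per-modality log-likelihoods (a simple linear fusion rule)."""
    fused = rel_audio * loglik_audio + rel_video * loglik_video
    return int(np.argmax(fused))

# Hypothetical per-class log-likelihoods for three candidate words.
audio = np.array([-10.0, -2.0, -5.0])   # scores from the audio HMM
video = np.array([-3.0, -8.0, -4.0])    # scores from the visual HMM

# When the acoustic channel is noisy, the audio weight is reduced
# and the visual modality dominates the fused decision.
print(fuse_decisions(audio, video, rel_audio=0.2, rel_video=0.8))  # → 2
```

With the weights reversed (clean audio, poor lighting), the same inputs yield class 1, showing how local reliability estimates steer the joint decision.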