This work proposes a way to integrate an information retrieval (IR) system with an automatic speech recognition (ASR) engine to support natural spoken queries. A broader interaction between the two modules is achieved by transmitting a lattice of terms to the IR system, in contrast with conventional systems, where only the best-path recognition output is transmitted. Acoustic scores associated with the term lattice are used to weight the terms. A latent semantic indexing (LSI) scheme is used in which documents and terms are mapped to a single reduced feature space with 400 semantic components. The conventional LSI method is modified, however, to allow the aforementioned broader interaction between acoustic hypotheses and semantic determination. The results show that the proposed method moderately outperforms the traditional approach for spoken queries formulated as casual phrases.
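The core idea above — weighting lattice terms by acoustic scores and combining them in a reduced LSI feature space — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the vocabulary, the random term vectors, and the `query_vector` function are hypothetical, and the acoustic weights stand in for whatever lattice posteriors the actual system uses. Only the 400-component dimensionality comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 400  # number of semantic components, as stated in the abstract
# Hypothetical LSI term coordinates; a real system derives these from an SVD
# of the term-document matrix.
vocab = {"river": 0, "bank": 1, "loan": 2, "water": 3}
term_vectors = rng.standard_normal((len(vocab), K))

def query_vector(lattice_terms):
    """Build a query vector from ASR lattice hypotheses.

    lattice_terms: list of (term, acoustic_weight) pairs; competing lattice
    hypotheses all contribute, instead of only the single best path.
    Returns the acoustically weighted sum of LSI term vectors, unit-normalized.
    """
    q = np.zeros(K)
    for term, weight in lattice_terms:
        if term in vocab:
            q += weight * term_vectors[vocab[term]]
    norm = np.linalg.norm(q)
    return q / norm if norm > 0 else q

# Example lattice: "bank" is the strongest hypothesis, but weaker
# alternatives still shift the query's position in the semantic space.
lattice = [("river", 0.6), ("bank", 0.9), ("loan", 0.3)]
q = query_vector(lattice)

# Documents mapped to the same space would then be ranked by cosine
# similarity against q.
doc = term_vectors[vocab["water"]] / np.linalg.norm(term_vectors[vocab["water"]])
score = float(doc @ q)
print(score)
```

Because every lattice term contributes in proportion to its acoustic score, a misrecognized best path does not fully determine the query's semantic position, which is the broader ASR-IR interaction the abstract describes.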