Abstract:
Language models are used extensively in state-of-the-art speech recognition systems to help determine the probability of a hypothesized word sequence. These probabilities, along with the acoustic model scores, allow the system to constrain the search space during recognition to only those word sequences that have a reasonable chance of being correct. Determining these probabilities requires knowledge of the entire problem space. In speech recognition, however, this is unreasonable, if not impossible, especially when working with the SWITCHBOARD corpus (a large corpus consisting of over 240 hours of recorded telephone conversations totaling almost 3 million words of text). Many statistical and rule-based approaches have been applied to this problem in order to arrive at a language model that minimizes the recognizer's word error rate (WER). One such technique incorporates part-of-speech (POS) information into the language model. This paper discusses the task of tagging the SWITCHBOARD corpus with POS information in the usual manner, and the problems encountered when trying to conform conversational speech to these tags.
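As a rough illustration of how these scores typically combine (a standard class-based formulation, not necessarily the exact model used in this paper), the recognizer picks the word sequence W that maximizes the product of the acoustic score and the language model probability, and a POS-based language model decomposes P(W) through the tag sequence:

\hat{W} = \arg\max_{W} \; P(A \mid W)\, P(W)

P(W) \approx \prod_{i=1}^{n} P(w_i \mid t_i)\, P(t_i \mid t_{i-1}, t_{i-2})

Here A is the acoustic observation, the w_i are words, and the t_i are their POS tags; the trigram tag history shown is one common choice and is assumed here for illustration only.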
Published in: Proceedings 1999 International Conference on Information Intelligence and Systems (Cat. No.PR00446)
Date of Conference: 31 October 1999 - 03 November 1999
Date Added to IEEE Xplore: 06 August 2002
Print ISBN: 0-7695-0446-9