Abstract:
This paper investigates semi-supervised methods for discriminative language modeling, whereby n-best lists are “hallucinated” for given reference text and are then used for training n-gram language models with the perceptron algorithm. We perform controlled experiments on a very strong baseline English conversational telephone speech (CTS) system, comparing three methods for simulating ASR output, and compare the results with training on “real” n-best lists output by the baseline recognizer. We find that methods based on extracting phrasal cohorts, similar to phrase-table extraction methods from machine translation, yield the largest gains of the three, achieving over half of the WER reduction of the fully supervised methods.
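To make the training setup concrete, the following is a minimal sketch of structured-perceptron training of a discriminative n-gram language model over n-best lists. It is an illustration only, not the paper's implementation: the data format (pairs of an oracle hypothesis and an n-best list with baseline ASR scores), the feature set (raw n-gram counts), and all function names are assumptions. In the paper's semi-supervised setting, the n-best lists would be the hallucinated ones simulated from reference text rather than real recognizer output.

```python
# Illustrative sketch (not the paper's implementation): structured-perceptron
# training of a discriminative n-gram language model over n-best lists.
# Hypothetical data format: each training example is (oracle, nbest), where
# `oracle` is the lowest-error hypothesis (or the reference) and `nbest` is
# a list of (hypothesis_words, baseline_asr_score) pairs.

from collections import Counter

def ngram_features(words, order=2):
    """Count n-grams up to `order` in a tokenized hypothesis."""
    feats = Counter()
    for n in range(1, order + 1):
        for i in range(len(words) - n + 1):
            feats[tuple(words[i:i + n])] += 1
    return feats

def score(feats, weights, baseline_score, baseline_weight=1.0):
    """Combine the baseline ASR score with learned n-gram feature weights."""
    return baseline_weight * baseline_score + sum(
        weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_train(examples, epochs=5, order=2):
    """examples: list of (oracle_words, [(hyp_words, baseline_score), ...])."""
    weights = Counter()
    for _ in range(epochs):
        for oracle, nbest in examples:
            # Pick the hypothesis the current model prefers.
            best_hyp, _ = max(
                nbest,
                key=lambda h: score(ngram_features(h[0], order), weights, h[1]))
            if best_hyp != oracle:
                # Standard perceptron update: promote the oracle's n-grams,
                # demote the n-grams of the wrongly chosen hypothesis.
                weights.update(ngram_features(oracle, order))
                weights.subtract(ngram_features(best_hyp, order))
    return weights
```

In a supervised setup `nbest` comes from decoding the acoustic data; in the semi-supervised setup studied here it would be simulated from the reference text, e.g. by substituting phrasal cohorts.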
Published in: 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 25-30 March 2012
Date Added to IEEE Xplore: 30 August 2012