In this paper, we propose a dynamic in-search discriminative training approach for large-scale HMMs in large vocabulary speech recognition. A previously proposed data selection method is used to choose competing hypotheses dynamically during the Viterbi beam search procedure. In particular, all active word-ending paths are examined during search against the reference transcription to identify competing tokens for different HMMs. The HMMs are then re-estimated with GPD-based discriminative training to minimize the total number of possible error tokens among all collected competing tokens. In this way, recognition errors, e.g., the word error rate, on the training data can be reduced indirectly. The proposed approach is flexible enough to run in either batch or incremental mode. Moreover, the method can be implemented efficiently to process large amounts of training data and to update a large-scale state-tied HMM set for large vocabulary recognition tasks. Preliminary results on the DARPA Communicator task show that the new discriminative training method improves recognition performance over our best ML-trained system.
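To make the GPD-based objective concrete, the sketch below shows the standard minimum classification error (MCE) formulation that GPD training typically descends on: a misclassification measure comparing the correct token's score against a soft maximum over collected competing tokens, smoothed by a sigmoid so that the (non-differentiable) token error count becomes differentiable. All function names, and the smoothing constants `eta` and `alpha`, are illustrative assumptions, not taken from the paper; the actual system updates HMM parameters rather than raw scores.

```python
import math

def misclassification_measure(correct_score, competitor_scores, eta=1.0):
    """d = -g(correct) + soft-max over competitors.

    The soft maximum is (1/eta) * log(mean(exp(eta * g_j))); as eta grows
    it approaches the score of the single strongest competing token.
    d > 0 means some competitor outscores the reference token.
    """
    soft_max = (1.0 / eta) * math.log(
        sum(math.exp(eta * g) for g in competitor_scores) / len(competitor_scores)
    )
    return -correct_score + soft_max

def smoothed_token_error(d, alpha=1.0):
    """Sigmoid loss: a differentiable surrogate for the 0/1 token error."""
    return 1.0 / (1.0 + math.exp(-alpha * d))

def total_smoothed_error(token_pairs, eta=1.0, alpha=1.0):
    """Sum the smoothed error over all collected competing-token sets.

    token_pairs: iterable of (correct_score, [competitor scores]).
    GPD training would take gradient steps on the HMM parameters
    to reduce this quantity.
    """
    return sum(
        smoothed_token_error(misclassification_measure(c, comps, eta), alpha)
        for c, comps in token_pairs
    )
```

A correctly recognized token (reference score above all competitors) contributes a loss near 0, while a misrecognized one contributes a loss near 1, so the sum approximates the number of error tokens among the competing tokens gathered during search.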