The paper presents a hybrid continuous-speech recognition system that achieves improved results on the speaker-dependent DARPA Resource Management task. This hybrid system, called the combined system, is based on a combination of normalized neural network output scores with hidden Markov model (HMM) emission probabilities. The neural network is trained under a mean squared error criterion and the HMM under maximum likelihood estimation. In theory, either criterion should reach the same word error rate given sufficient training data. Since this is never the case in practice, combining two different criteria, each extracting complementary characteristics from the features, is an attractive idea. A state-of-the-art HMM system is combined with a time delay neural network (TDNN) integrated into a Viterbi framework. A hierarchical TDNN structure is described that splits training into subtasks corresponding to subsets of phonemes. This structure makes training of TDNNs on large-vocabulary tasks manageable on workstations. It is shown that the combined system, despite the low accuracy of the hierarchical TDNN, achieves a word error rate reduction of 15% with respect to our state-of-the-art HMM system. This reduction is obtained with only a 10% increase in the number of parameters.
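The abstract describes combining normalized neural network output scores with HMM emission probabilities, without specifying the exact combination rule. A common scheme for such hybrid systems is to convert the network's normalized outputs (posteriors) into scaled likelihoods by dividing by the state priors, then interpolate with the HMM emission scores in the log domain. The sketch below illustrates that idea; the function name, the log-linear interpolation, and the `weight` parameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def combined_emission_log_prob(nn_posteriors, state_priors,
                               hmm_log_likelihoods, weight=0.5):
    """Hypothetical log-linear combination of TDNN and HMM scores.

    nn_posteriors:       normalized TDNN output scores per state (sum to 1).
    state_priors:        prior probability of each state, e.g. estimated
                         from state occupation counts on the training data.
    hmm_log_likelihoods: log emission probabilities from the HMM's
                         observation densities.
    weight:              interpolation weight (a free parameter; the
                         abstract does not specify how the streams are mixed).
    """
    # Convert posteriors to scaled likelihoods: p(x|s) is proportional
    # to p(s|x) / p(s), which is what a Viterbi decoder expects.
    nn_log_scaled = np.log(nn_posteriors) - np.log(state_priors)
    # Log-linear interpolation of the two score streams.
    return weight * nn_log_scaled + (1.0 - weight) * hmm_log_likelihoods
```

Inside a Viterbi decoder, this combined score would simply replace the plain HMM emission log-probability at each frame and state, leaving the rest of the search unchanged.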