We explore the use of sequential, mistake-driven updates for online learning and acoustic feature adaptation in large-margin hidden Markov models (HMMs). The updates are applied to the parameters of acoustic models after the decoding of individual training utterances. For large-margin training, the updates attempt to separate the log-likelihoods of correct and incorrect transcriptions by an amount proportional to their Hamming distance. For acoustic feature adaptation, the updates attempt to improve recognition by linearly transforming the features computed by the front end. We evaluate acoustic models trained in this way on the TIMIT speech database. We find that online updates for large-margin training not only converge faster than analogous batch optimizations, but also yield lower phone error rates than approaches that do not attempt to enforce a large margin. Finally, experimenting with different schemes for initialization and parameter-tying, we find that acoustic feature adaptation leads to further improvements beyond the already significant gains achieved by large-margin training.
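The mistake-driven margin update described above can be sketched in a simplified linear-model setting. This is an illustrative sketch, not the paper's HMM formulation: the function names (`score`, `margin_update`), the feature-vector representation, and the hyperparameters `rho` (margin scale) and `eta` (step size) are all assumptions made for the example.

```python
def hamming(a, b):
    """Number of positions where two equal-length label sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def score(weights, features):
    """Linear score of one candidate transcription's feature representation.
    (Stands in for the log-likelihood under the acoustic model.)"""
    return sum(w * f for w, f in zip(weights, features))

def margin_update(weights, feat_correct, feat_incorrect, dist, rho=1.0, eta=0.1):
    """Mistake-driven update: if the correct transcription does not beat the
    incorrect one by a margin proportional to their Hamming distance `dist`,
    move the weights toward the correct features and away from the incorrect
    ones. `rho` and `eta` are illustrative hyperparameters."""
    violation = rho * dist - (score(weights, feat_correct) - score(weights, feat_incorrect))
    if violation > 0:  # margin not yet enforced for this utterance
        weights = [w + eta * (fc - fi)
                   for w, fc, fi in zip(weights, feat_correct, feat_incorrect)]
    return weights

# Toy usage: repeat the sequential update until the margin target is met.
w = [0.0, 0.0]
feat_correct, feat_incorrect = [1.0, 0.0], [0.0, 1.0]
dist = hamming(["a", "b"], ["c", "d"])  # Hamming distance 2
for _ in range(100):
    w = margin_update(w, feat_correct, feat_incorrect, dist)
```

After convergence, the score gap between the correct and incorrect candidates is at least `rho * dist`, mirroring the paper's goal of separating log-likelihoods by an amount proportional to the Hamming distance.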