Online Learning and Acoustic Feature Adaptation in Large-Margin Hidden Markov Models

Authors: Chih-Chieh Cheng (Dept. of Computer Science & Engineering, University of California, San Diego, La Jolla, CA, USA), Fei Sha, and L. K. Saul

We explore the use of sequential, mistake-driven updates for online learning and acoustic feature adaptation in large-margin hidden Markov models (HMMs). The updates are applied to the parameters of acoustic models after the decoding of individual training utterances. For large-margin training, the updates attempt to separate the log-likelihoods of correct and incorrect transcriptions by an amount proportional to their Hamming distance. For acoustic feature adaptation, the updates attempt to improve recognition by linearly transforming the features computed by the front end. We evaluate acoustic models trained in this way on the TIMIT speech database. We find that online updates for large-margin training not only converge faster than analogous batch optimizations, but also yield lower phone error rates than approaches that do not attempt to enforce a large margin. Finally, experimenting with different schemes for initialization and parameter-tying, we find that acoustic feature adaptation leads to further improvements beyond the already significant gains achieved by large-margin training.
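The mistake-driven update described above can be illustrated with a simplified sketch. Here, dot products with a parameter vector stand in for the HMM log-likelihoods of the correct and competing transcriptions; all function and variable names (`online_large_margin_update`, `phi_c`, `phi_i`) are hypothetical and not from the paper, and the linear scoring is a deliberate simplification of the acoustic model:

```python
import numpy as np

def online_large_margin_update(theta, phi_correct, phi_incorrect,
                               hamming, lr=0.1):
    """One sequential, mistake-driven update (simplified sketch).

    The update fires only when the score of the correct transcription
    fails to exceed that of the incorrect one by a margin proportional
    to their Hamming distance, mirroring the large-margin criterion
    described in the abstract.
    """
    margin = theta @ phi_correct - theta @ phi_incorrect
    if margin < hamming:  # margin violated -> adjust parameters
        theta = theta + lr * (phi_correct - phi_incorrect)
    return theta

# Toy usage with made-up feature vectors for two transcriptions.
theta = np.zeros(3)
phi_c = np.array([1.0, 0.0, 1.0])  # features of correct transcription
phi_i = np.array([0.0, 1.0, 1.0])  # features of competing transcription
theta = online_large_margin_update(theta, phi_c, phi_i, hamming=1.0)
```

In the paper itself the updates are applied to the acoustic model parameters after decoding each training utterance; the sketch only conveys the margin-violation test and the direction of the correction.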

Published in:

IEEE Journal of Selected Topics in Signal Processing (Volume 4, Issue 6)