Continuous Stochastic Feature Mapping Based on Trajectory HMMs

Authors: Heiga Zen (Cambridge Research Laboratory, Toshiba Research Europe Ltd., Cambridge, UK), Yoshihiko Nankaku, Keiichi Tokuda

This paper proposes a technique for continuous stochastic feature mapping based on trajectory hidden Markov models (HMMs), which are derived from HMMs by imposing explicit relationships between static and dynamic features. Although Gaussian mixture model (GMM)- and HMM-based feature-mapping techniques work effectively, their accuracy occasionally degrades because frame-by-frame mapping produces inappropriate dynamic characteristics. Applying dynamic-feature constraints at the mapping stage alleviates this problem, but it also introduces an inconsistency between training and mapping. The proposed technique eliminates this inconsistency while retaining the benefits of dynamic-feature constraints, and it performs transformation over entire sequences rather than frame by frame. Results from speaker-conversion, acoustic-to-articulatory inversion-mapping, and noise-compensation experiments demonstrate that the new approach outperforms the conventional one.
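The core mechanism the abstract refers to, imposing explicit relationships between static and dynamic features, can be illustrated with the standard delta-window formulation used in trajectory HMMs: stacked observations o = Wc are a linear function of the static trajectory c, and the maximum-likelihood trajectory under Gaussian output distributions has a closed-form weighted least-squares solution. The sketch below is a minimal NumPy illustration with a single static dimension and one delta window (Δc_t = 0.5·(c_{t+1} − c_{t−1}), clipped at the boundaries); the function names and the diagonal-covariance simplification are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def delta_window_matrix(T):
    """Build W such that o = W c, where o interleaves [c_t, dc_t]
    and dc_t = 0.5 * (c_{t+1} - c_{t-1}) with clipping at boundaries.
    (Illustrative single-stream, single-delta case.)"""
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                       # static row: c_t
        lo, hi = max(t - 1, 0), min(t + 1, T - 1)
        W[2 * t + 1, hi] += 0.5                 # delta row: +0.5 * c_{t+1}
        W[2 * t + 1, lo] -= 0.5                 # delta row: -0.5 * c_{t-1}
    return W

def ml_trajectory(mu, var):
    """Closed-form ML static trajectory under N(W c; mu, diag(var)):
        c = (W' S^-1 W)^-1 W' S^-1 mu,
    i.e., a weighted least-squares fit honoring both static and
    dynamic (delta) statistics jointly over the whole sequence."""
    T = len(mu) // 2
    W = delta_window_matrix(T)
    P = np.diag(1.0 / var)                      # stacked-feature precisions
    A = W.T @ P @ W                             # normal-equation matrix
    b = W.T @ P @ mu
    return np.linalg.solve(A, b)                # smooth static trajectory
```

Because the delta constraint couples neighboring frames through W, the solution is a property of the whole sequence rather than of each frame independently, which is the sequence-level behavior the abstract contrasts with frame-by-frame mapping.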

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume: 19, Issue: 2)