VTLN Using Analytically Determined Linear-Transformation on Conventional MFCC

Authors:

Sanand, D. R. (Norwegian University of Science and Technology, Trondheim, Norway); Umesh, S.

Abstract:

In this paper, we propose a method to analytically obtain a linear-transformation on the conventional Mel frequency cepstral coefficient (MFCC) features that corresponds to conventional vocal tract length normalization (VTLN)-warped MFCC features, thereby simplifying VTLN processing. There have been many attempts to obtain such a linear-transformation, but all the previously proposed approaches either modify the signal processing (and therefore do not produce conventional MFCC), or derive a linear-transformation that does not correspond to conventional VTLN-warping, or estimate matrices that are data-dependent. In short, the conventional VTLN part of an automatic speech recognition (ASR) system cannot simply be replaced with any of the previously proposed methods. Umesh proposed the idea of using band-limited interpolation for performing VTLN-warping on MFCC using plain cepstra. Motivated by this work, Panchapagesan and Alwan proposed a linear-transformation to perform VTLN-warping on conventional MFCC. However, in their approach, VTLN warping is specified in the Mel-frequency domain and is not equivalent to conventional VTLN. In this paper, we present an approach which also draws inspiration from the work of Umesh, and which we believe for the first time performs conventional VTLN as a linear-transformation on conventional MFCC using the ideas of band-limited interpolation. Deriving such a linear-transformation to perform VTLN allows us to use the VTLN-matrices in a transform-based adaptation framework, with its associated advantages, while requiring the estimation of only a single parameter. Using four different tasks, we show that our proposed approach has recognition performance almost identical to conventional VTLN on both clean and noisy speech data.
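The abstract's central idea, that frequency warping can be realized as a linear-transformation on cepstra because a log spectrum described by finitely many cepstral coefficients is band-limited, can be illustrated with a short sketch. The sketch below is a simplified illustration of that general principle, not the paper's actual construction: the function name, the piecewise-linear warp with edge correction, and the least-squares cosine resampling are all assumptions made here for illustration.

```python
import numpy as np

def vtln_warp_matrix(n_ceps: int, n_grid: int, alpha: float) -> np.ndarray:
    """Return a matrix T(alpha) such that c_warped ~ T(alpha) @ c.

    A log spectrum described by n_ceps cepstral coefficients is a
    finite cosine series, i.e., band-limited, so evaluating it at
    warped frequencies is exact and linear in the cepstra. The
    piecewise-linear warp and the least-squares resampling below are
    illustrative choices, not the paper's construction.
    """
    # Uniform grid of normalized frequencies on [0, 1].
    f = np.linspace(0.0, 1.0, n_grid)

    # Piecewise-linear VTLN warp, pinned so that warp(1) = 1.
    knee = 0.85
    warped = np.where(
        f <= knee,
        alpha * f,
        alpha * knee + (1.0 - alpha * knee) * (f - knee) / (1.0 - knee),
    )
    warped = np.clip(warped, 0.0, 1.0)

    k = np.arange(n_ceps)
    # Synthesis matrices: log spectrum S(f) = sum_k c_k cos(pi * k * f),
    # sampled on the uniform grid and on the warped grid.
    synth_uniform = np.cos(np.pi * np.outer(f, k))
    synth_warped = np.cos(np.pi * np.outer(warped, k))

    # Analysis: least-squares inverse mapping spectrum samples -> cepstra.
    analysis = np.linalg.pinv(synth_uniform)

    # Cepstra of the spectrum resampled at warped frequencies.
    return analysis @ synth_warped


# Usage: a 13 x 13 transform for a single warp factor alpha = 0.9.
T = vtln_warp_matrix(n_ceps=13, n_grid=256, alpha=0.9)
c = np.random.randn(13)   # stand-in for one cepstral feature vector
c_warped = T @ c
```

Because the matrix depends only on the scalar alpha, normalization reduces to choosing one warp factor per speaker and applying a single matrix multiply per feature vector, which is what lets such VTLN-matrices slot into a transform-based adaptation framework.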

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume 20, Issue 5)