Soft margin estimation of Gaussian mixture model parameters for spoken language recognition

Authors:

Donglai Zhu, Bin Ma, Haizhou Li (Institute for Infocomm Research, Singapore)

Abstract:

This paper extends our previous work on large margin estimation (LME) of GMM parameters with the extended Baum-Welch (EBW) algorithm for spoken language recognition. To overcome the limitation of LME that negative samples in the training set are not used in parameter estimation, we propose a soft margin estimation (SME) method. The soft margin is scaled by a loss function measuring the distance between a negative sample and the classification boundary. We formulate the constrained optimization of SME as an unconstrained optimization over both positive and negative samples using a penalty function, and update the GMM parameters with the EBW algorithm. Experiments on the NIST language recognition evaluation (LRE) 2007 task show that SME effectively improves on the LME performance.
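
A minimal sketch of the penalty-function reformulation described in the abstract (the notation and loss form below are assumptions for illustration, not the paper's exact formulation): a margin constraint of the form d(X_i; \Lambda) \ge \rho on positive samples can be folded into a single unconstrained objective over both positive and negative samples,

\min_{\Lambda,\, \rho > 0} \; -\rho \;+\; C \sum_{i \in \mathcal{P}} \max\bigl\{0,\; \rho - d(X_i; \Lambda)\bigr\} \;+\; C \sum_{j \in \mathcal{N}} \ell\bigl(d(X_j; \Lambda)\bigr),

where \Lambda denotes the GMM parameters, d(\cdot) a separation score, \mathcal{P} and \mathcal{N} the positive and negative training samples, \ell(\cdot) a loss measuring the distance of a negative sample from the classification boundary, and C a penalty weight; the resulting unconstrained objective is then amenable to EBW-style parameter updates.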

Published in:

2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Date of Conference:

14-19 March 2010