Auditory models with Kohonen SOFM and LVQ for speaker independent phoneme recognition

Author: T.R. Anderson — Bioacoustic & Biocommunications Branch, Armstrong Lab., Wright-Patterson AFB, OH, USA

Neural networks that employed unsupervised learning were applied to the output of two different models of the auditory periphery to perform phoneme recognition. Experiments comparing these two auditory-model representations against mel-cepstral coefficients showed that the auditory models achieved significantly better phoneme recognition accuracy under the conditions tested (a high signal-to-noise ratio and a large database of speakers). However, the three representations made different types of broad-class recognition errors. The Patterson auditory-model representation performed best, with the highest overall phoneme and broad-class accuracy.
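The title names Kohonen's self-organizing feature map (SOFM) and learning vector quantization (LVQ). As a rough illustration of those two techniques (not the paper's implementation — the map size, learning-rate schedule, and neighborhood width below are arbitrary assumptions), a minimal sketch in Python:

```python
import numpy as np

def som_train(data, n_units=8, dim=2, epochs=20, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D Kohonen self-organizing feature map (unsupervised).

    Illustrative sketch only: hyperparameters are arbitrary assumptions.
    """
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_units, dim))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                     # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)   # shrinking neighborhood
        for x in data:
            # Best-matching unit (BMU): the unit closest to the input vector.
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            # Pull the BMU and its topological neighbors toward the input.
            for j in range(n_units):
                h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                weights[j] += lr * h * (x - weights[j])
    return weights

def lvq1_step(weights, labels, x, y, lr=0.1):
    """One LVQ1 update: supervised fine-tuning of labeled codebook vectors.

    Moves the winning codebook vector toward the input if its label
    matches, away otherwise.
    """
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    sign = 1.0 if labels[bmu] == y else -1.0
    weights[bmu] += sign * lr * (x - weights[bmu])
    return bmu
```

In a phoneme-recognition setting such as the one described, the inputs would be frames of the auditory-model (or mel-cepstral) representation rather than the toy 2-D vectors assumed here, and the SOFM codebook would typically be labeled and refined with LVQ before classification.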

Published in:

Proceedings of the 1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Vol. 7

Date of Conference:

27 Jun-2 Jul 1994