Reduced complexity tone classifier for automatic tonal speech recognizer

5 Author(s)
Chaiwongsai, J. ; Dept. of Electron. & Telecommun. Eng., King Mongkut's Univ. of Technol. Thonburi, Bangkok, Thailand ; Chiracharit, W. ; Chamnongthai, K. ; Miyanaga, Y.

A tone classifier is an essential component of an automatic tonal speech recognizer (ATSR) because, in tonal languages, word meaning depends on tone. Many researchers have achieved highly accurate tone recognition using sophisticated mathematical techniques, but these methods feed the whole input speech into the pitch detection process. This paper proposes a reduced-complexity tone classifier for the automatic tonal speech recognizer. The classifier reduces the number of input frames by detecting only the vowel segments and passing them to pitch detection based on the average magnitude difference function, a scheme called vowel-AMDF (V-AMDF). It therefore requires fewer floating-point operations (FLOPs) than methods that process the whole input speech, which makes it suitable for portable electronic equipment. In addition, V-AMDF reduces F0-contour errors caused by the influence of neighboring syllables. The proposed classifier was tested on 19 Thai words selected from GPS voice-activation and phone-dialing commands. The experimental results show 86.0% recognition accuracy and a 21.8% reduction in the number of FLOPs compared with using the whole input speech.
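To illustrate the pitch-detection stage the abstract builds on, the sketch below estimates F0 for a single voiced frame using the standard average magnitude difference function (AMDF). This is a hypothetical, minimal illustration, not the paper's V-AMDF implementation, which additionally restricts the analysis to detected vowel segments to cut the FLOP count; the function name, frame length, and pitch search range are all assumptions.

```python
# Minimal AMDF-based F0 estimation on one voiced frame (illustrative sketch).
# The paper's V-AMDF would first isolate vowel frames before this step.
import numpy as np

def amdf_f0(frame, fs, f0_min=80.0, f0_max=400.0):
    """Estimate F0 (Hz) of a voiced frame via the AMDF valley.

    AMDF(lag) = mean(|x[n] - x[n + lag]|); the deepest valley over the
    plausible lag range marks the pitch period in samples.
    """
    n = len(frame)
    lag_min = int(fs / f0_max)                  # shortest period to search
    lag_max = min(int(fs / f0_min), n - 1)      # longest period to search
    amdf = np.array([
        np.mean(np.abs(frame[:n - lag] - frame[lag:]))
        for lag in range(lag_min, lag_max + 1)
    ])
    best_lag = lag_min + int(np.argmin(amdf))   # valley = pitch period
    return fs / best_lag

# Quick check on a synthetic 150 Hz tone (40 ms frame at 16 kHz)
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = np.sin(2 * np.pi * 150.0 * t)
print(amdf_f0(frame, fs))  # close to 150 Hz
```

Because AMDF uses only subtractions and absolute values (no multiplications, unlike autocorrelation), it is a common choice when the FLOP budget matters, which is consistent with the paper's portable-equipment motivation.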

Published in:

2012 International Symposium on Communications and Information Technologies (ISCIT)

Date of Conference:

2-5 Oct. 2012