
Application of wavelet transforms for C/V segmentation on Mandarin speech signals

Authors: S.H. Chen; J.F. Wang (Dept. of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan)

It has been demonstrated that wavelet transforms can be used to find the C/V (consonant/vowel) segmentation point of a Mandarin speech signal. The basic idea is to use a specific function, the product function, to indicate the C/V segmentation point. Based on the wavelet transform, the product function is generated from an appropriate approximation signal and detail signal of the input speech, and its energy profile contains the evidence for detecting the C/V segmentation point. It is shown that the C/V segmentation point can be obtained directly from the product function and its energy profile. The main advantage of the proposed scheme is its capability of searching forward and directly for the C/V segmentation point, with no need for any predetermined threshold. Thus, the pitch detector and the backward processing required by the conventional C/V segmentation algorithm are completely avoided. Analysis of the proposed algorithm on various types of Mandarin speech indicates considerable improvement over the conventional method. Experiments show that the overall accuracy of the proposed method reaches 95.4%.
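The pipeline described above can be sketched as follows. This is only an illustrative sketch, not the authors' algorithm: a single-level Haar transform stands in for whatever wavelet the paper uses, and a simple peak-energy rule stands in for the paper's decision procedure, neither of which the abstract fully specifies.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar wavelet transform.

    Returns the approximation (low-pass) and detail (high-pass)
    signals, each half the length of the input.
    """
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]          # drop a trailing odd sample
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

def cv_segment(signal, frame=64):
    """Locate a candidate C/V boundary (sketch, not the paper's method).

    The product function is taken as the pointwise product of the
    approximation and detail signals; its short-time energy profile
    is then scanned, and the peak-energy frame is mapped back to a
    sample index in the original signal (x2 undoes the transform's
    downsampling). The peak-picking rule here is an assumption.
    """
    approx, detail = haar_dwt(signal)
    product = approx * detail
    n_frames = len(product) // frame
    energy = np.array([
        np.sum(product[i * frame:(i + 1) * frame] ** 2)
        for i in range(n_frames)
    ])
    return int(np.argmax(energy)) * frame * 2
```

Because the boundary is read directly off the energy profile, the sketch shares the property the abstract emphasises: it scans forward once and needs no tuned threshold or backward pass.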

Published in:

IEE Proceedings - Vision, Image and Signal Processing (Volume 148, Issue 2)