
Acoustic to articulatory parameter mapping using an assembly of neural networks

4 Author(s)
Rahim, M.G.; AT&T Bell Labs., Murray Hill, NJ, USA; Kleijn, W.B.; Schroeter, J.; Goodyear, C.C.

The authors describe an efficient procedure for acoustic-to-articulatory parameter mapping using neural networks. An assembly of multilayer perceptrons, each assigned to a specific region of the articulatory space, is used to map acoustic parameters of speech into vocal tract areas. The model is trained in two stages: in the first stage, a codebook of suitably normalized articulatory parameters is used; in the second, real speech data are used to further improve the mapping. In general, acoustic-to-articulatory parameter mapping is nonunique: several vocal tract shapes can produce identical spectral envelopes. The model accommodates this ambiguity. During synthesis, neural networks are selected by dynamic programming using a criterion that ensures smoothly varying vocal tract shapes while maintaining a good spectral match.
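The abstract only sketches the dynamic-programming selection step, so the following is an illustrative reconstruction, not the authors' implementation. It assumes each network in the assembly proposes a candidate vocal tract shape per frame with an associated spectral-mismatch cost (both hypothetical inputs here), and picks one candidate per frame so that the cumulative spectral cost plus a squared-distance smoothness penalty between successive shapes is minimized:

```python
import numpy as np

def dp_select(candidates, spectral_costs, smooth_weight=1.0):
    """Select one candidate vocal-tract shape per frame by dynamic programming.

    candidates[t][k]     : shape vector proposed for frame t by network k
    spectral_costs[t][k] : spectral mismatch of that candidate (assumed given)
    Transition cost is the squared Euclidean distance between successive
    shapes, which favors smoothly varying vocal tract tracks.
    Returns the list of chosen network indices, one per frame.
    """
    T = len(candidates)
    K = len(candidates[0])
    cost = np.full((T, K), np.inf)   # best cumulative cost ending at (t, k)
    back = np.zeros((T, K), dtype=int)
    cost[0] = np.asarray(spectral_costs[0], dtype=float)
    for t in range(1, T):
        for k in range(K):
            trans = np.array([
                smooth_weight * np.sum((np.asarray(candidates[t][k])
                                        - np.asarray(candidates[t - 1][j])) ** 2)
                for j in range(K)
            ])
            total = cost[t - 1] + trans
            back[t, k] = int(np.argmin(total))
            cost[t, k] = spectral_costs[t][k] + total[back[t, k]]
    # Backtrack from the cheapest final state.
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

For example, if one network's candidates stay near one region of the articulatory space with low spectral cost, the smoothness term keeps the selected track from jumping to a distant shape even when an alternative frame-by-frame match is only slightly worse.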

Published in:

1991 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91)

Date of Conference:

14-17 Apr 1991