Neural network vector quantizer design using sequential and parallel learning techniques

3 Author(s)
F. H. Wu ; US West Adv. Technol. Inc., Englewood, CO, USA ; K. K. Parhi ; K. Ganesan

Many techniques for quantizing large sets of input vectors into much smaller sets of output vectors have been developed. Various neural network based techniques for generating the output vectors via system training are studied. The variations are centered around a neural net vector quantization (NNVQ) method which combines the well-known conventional Linde, Buzo and Gray (1980) (LBG) technique and the neural net based Kohonen (1984) technique. Sequential and parallel learning techniques for designing efficient NNVQs are given. The schemes presented require less computation time due to a new modified gain formula, partial/zero neighbor updating, and parallel learning of the code vectors. Using Gaussian-Markov source and speech signal benchmarks, it is shown that these new approaches lead to distortion as good as or better than that obtained using the LBG and Kohonen approaches.
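To make the ideas in the abstract concrete, the following is a minimal sketch of Kohonen-style vector quantizer training with "zero neighbor" updating, i.e. only the winning code vector is moved toward each training sample. The simple 1/t gain schedule is an assumed placeholder; the paper's modified gain formula is not given in the abstract.

```python
import random

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_vq(data, n_codes=8, n_epochs=5, seed=0):
    """Sequential Kohonen-style VQ training with zero-neighbor updating."""
    rng = random.Random(seed)
    # Initialize code vectors from random training samples.
    codebook = [list(v) for v in rng.sample(data, n_codes)]
    t = 0
    for _ in range(n_epochs):
        for x in data:
            t += 1
            gain = 1.0 / t  # assumed decaying gain; NOT the paper's formula
            # Winner = nearest code vector in squared Euclidean distance.
            w = min(range(n_codes), key=lambda i: sq_dist(codebook[i], x))
            # Zero-neighbor update: only the winner moves toward x.
            codebook[w] = [c + gain * (xj - c)
                           for c, xj in zip(codebook[w], x)]
    return codebook

def distortion(data, codebook):
    """Mean squared quantization error over the data set."""
    return sum(min(sq_dist(x, c) for c in codebook) for x in data) / len(data)

# Toy Gaussian source (the paper uses Gauss-Markov and speech benchmarks).
rng = random.Random(1)
data = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(1000)]
cb = train_vq(data)
mean = [sum(x[j] for x in data) / len(data) for j in range(2)]
print(distortion(data, cb), distortion(data, [mean]))
```

Parallel learning, as described in the abstract, would update several code vectors concurrently instead of one winner per sample; this winner-only loop is the sequential, zero-neighbor special case.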

Published in:

1991 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91)

Date of Conference:

14-17 Apr 1991