Sequential and parallel neural network vector quantizers

3 Author(s)
Parhi, K.K. (US West Adv. Technol. Inc., Boulder, CO, USA); Wu, F.H.; Genesan, K.

Presents novel sequential and parallel learning techniques for codebook design in vector quantizers using neural network approaches. These techniques are used in the training phase of vector quantizer design. They combine the split-and-cluster methodology of traditional vector quantizer design with neural learning, leading to better quantizer design with lower distortion. The sequential learning approach overcomes the codeword underutilization problem of the competitive learning network; as a result, this network requires only partial or zero updating, as opposed to the full neighbor updating needed in the self-organizing feature map. The parallel learning network, while satisfying the above characteristics, also allows the codewords to be learned in parallel, and can be used for faster codebook design in a multiprocessor environment. It is shown that the sequential learning scheme can sometimes outperform the traditional LBG algorithm, while the parallel learning scheme performs very close to the LBG and sequential learning algorithms.
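For context on the "partial updating" contrasted with the self-organizing feature map above, the sketch below shows a generic winner-take-all competitive-learning codebook trainer, in which only the closest codeword is moved toward each training vector. This is a baseline illustration only, not the paper's sequential or parallel algorithms (which additionally incorporate the split-and-cluster strategy); the function names and the learning-rate schedule are illustrative assumptions.

```python
# Minimal sketch of winner-take-all competitive learning for VQ codebook design.
# NOT the paper's sequential/parallel method; a generic baseline for comparison.
import numpy as np

def train_codebook(data, num_codewords, epochs=20, lr0=0.1, seed=0):
    """Competitive learning with partial updating: for each training vector,
    only the winning (nearest) codeword is updated, unlike the full
    neighborhood updates of a self-organizing feature map."""
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # simple decaying step size (an assumption)
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[winner] += lr * (x - codebook[winner])  # move winner toward x
    return codebook

def mean_distortion(data, codebook):
    """Average squared-error distortion of the nearest-codeword quantizer."""
    d = np.sum((data[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
    return float(d.min(axis=1).mean())

if __name__ == "__main__":
    samples = np.random.default_rng(1).normal(size=(2000, 4))
    cb = train_codebook(samples, num_codewords=16)
    print("mean distortion:", mean_distortion(samples, cb))
```

A known weakness of this plain scheme, noted in the abstract, is codeword underutilization: codewords that rarely win are rarely updated, which is one of the problems the paper's sequential learning approach is designed to overcome.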

Published in:

IEEE Transactions on Computers (Volume: 43, Issue: 1)