Optimization of Vector Quantization by Adaptive Associative-Memory-Based Codebook Learning in Combination with Huffman Coding

3 Author(s)
Kawabata, A.; Res. Inst. for Nanodevice & Bio Syst., Hiroshima Univ., Hiroshima, Japan; Koide, T.; Mattausch, H.J.

In the presented research on codebook optimization for vector quantization, an associative memory architecture is applied, which searches for the most similar data among previously stored reference data. To realize the learning of new codebook data, a learning algorithm based on this associative memory is implemented, which imitates the concept of human short-term/long-term memory. The quality improvement of the vector-quantization codebook created with the proposed learning algorithm, and its dependence on the learning parameters, are evaluated with the Peak Signal-to-Noise Ratio (PSNR), an index of image quality. A quantitative PSNR improvement of 2.5-3.0 dB is verified. Since the learning algorithm orders the codebook elements according to their usage frequency in the vector-quantization process, Huffman coding is additionally applied and is verified to further improve the compression ratio from 12.8 to 14.1.
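The learning algorithm itself is detailed in the full paper; as a rough, hedged illustration of the pipeline the abstract outlines, the Python sketch below quantizes image blocks against a codebook by nearest-match search (the role played by the associative memory), evaluates the result with PSNR, and Huffman-codes the resulting index stream so that frequently used codebook entries receive shorter codes. The block size (4x4), codebook size (256), and random data are illustrative assumptions, not values from the paper, and the adaptive short/long-term-memory learning step is not reproduced here.

# Minimal sketch (not the authors' implementation): vector quantization,
# PSNR evaluation, and Huffman coding of the codebook-index stream.
# Block size, codebook size, and data are illustrative assumptions.

import heapq
from collections import Counter

import numpy as np


def quantize(blocks, codebook):
    """Map each block to the index of its nearest codebook vector
    (nearest-match search, the role of the associative memory)."""
    # blocks: (N, d), codebook: (K, d)
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)


def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)


def huffman_code_lengths(freqs):
    """Return the Huffman code length per symbol for a frequency table.
    Each merge of the two least-frequent subtrees adds one bit to every
    symbol they contain."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    counter = len(heap)  # tie-breaker for equal frequencies
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return lengths


# Example: quantize 1024 random 4x4 blocks with a random 256-entry codebook.
rng = np.random.default_rng(0)
blocks = rng.integers(0, 256, size=(1024, 16)).astype(float)
codebook = rng.integers(0, 256, size=(256, 16)).astype(float)

indices = quantize(blocks, codebook)
reconstructed = codebook[indices]
print("PSNR: %.2f dB" % psnr(blocks, reconstructed))

# Huffman coding of the index stream: frequently used codebook entries get
# shorter codes, improving on fixed-length (8-bit) indices.
freqs = Counter(indices.tolist())
lengths = huffman_code_lengths(freqs)
huffman_bits = sum(freqs[s] * lengths[s] for s in freqs)
fixed_bits = len(indices) * 8
print("Huffman bits: %d vs fixed-length bits: %d" % (huffman_bits, fixed_bits))

With real image data and the learned, frequency-ordered codebook, the paper reports that this additional Huffman stage improves the compression ratio from 12.8 to 14.1.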

Published in:

2010 First International Conference on Networking and Computing (ICNC)

Date of Conference:

17-19 Nov. 2010