Upper bounds on empirically optimal quantizers

Authors:
Dong Sik Kim (Sch. of Electron. & Inf. Eng., Hankuk Univ. of Foreign Studies, Kyonggi-do, South Korea); M. R. Bell

In designing a vector quantizer from a training sequence (TS), the training algorithm seeks an empirically optimal quantizer that minimizes the selected distortion criterion over the sequence. To evaluate the performance of the trained quantizer, we can use the empirically minimized distortion obtained when designing the quantizer. Several upper bounds on the empirically minimized distortion are proposed, with numerical results. The bounds hold pointwise, i.e., for each distribution with finite second moment in a class. From the pointwise bounds, it is possible to derive a worst-case bound that is better than current bounds for practical values of the training ratio β, the ratio of the TS size to the codebook size. It is shown that the empirically minimized distortion underestimates the true minimum distortion by more than a factor of (1-1/m), where m is the sequence size. Furthermore, through an asymptotic analysis in the codebook size, a multiplication factor [1-(1-e^{-β})/β] ≈ (1-1/β) for an asymptotic bound is shown. Several asymptotic bounds in terms of the vector dimension and the type of source are also introduced.
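The setting described in the abstract can be illustrated with a minimal sketch (not from the paper): a scalar quantizer is trained on a small training sequence with Lloyd's algorithm under squared-error distortion, and the empirically minimized (training) distortion is compared with an estimate of the true distortion from a large held-out sample. All names and parameter values here (codebook size k, training ratio β, Gaussian source) are illustrative assumptions.

```python
import numpy as np

def lloyd(train, k, iters=50, seed=0):
    # Lloyd's algorithm: iteratively find an empirically optimal
    # codebook minimizing mean squared error over the training sequence.
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), k, replace=False)]
    for _ in range(iters):
        # nearest-codeword assignment for each training sample
        idx = np.abs(train[:, None] - codebook[None, :]).argmin(axis=1)
        # centroid update (keep the old codeword if a cell is empty)
        for j in range(k):
            cell = train[idx == j]
            if len(cell):
                codebook[j] = cell.mean()
    return codebook

def distortion(x, codebook):
    # mean squared error when quantizing x with the given codebook
    return (np.abs(x[:, None] - codebook[None, :]).min(axis=1) ** 2).mean()

rng = np.random.default_rng(1)
k, beta = 8, 8                     # codebook size and training ratio β
m = beta * k                       # training-sequence size m = βk
train = rng.standard_normal(m)     # illustrative Gaussian source
codebook = lloyd(train, k)

emp_dist = distortion(train, codebook)   # empirically minimized distortion
true_dist = distortion(rng.standard_normal(100_000), codebook)  # ≈ true distortion
print(emp_dist, true_dist)  # emp_dist is typically the smaller of the two
```

Because the codebook is fit to the training sequence itself, the training distortion is an optimistic (downward-biased) estimate of the quantizer's true distortion, which is the underestimation effect the bounds in the paper quantify.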

Published in:

IEEE Transactions on Information Theory (Volume 49, Issue 4)