Selective use of multiple entropy models in audio coding

Authors: S. Mehrotra and Wei-ge Chen (Microsoft Corp., Redmond, WA)

Multiple entropy models for Huffman or arithmetic coding are widely used to improve compression efficiency when the source probability distribution varies. However, using multiple entropy models significantly increases the memory requirements of both the encoder and the decoder. In this paper, we present an algorithm that retains almost all of the compression gain of multiple entropy models for only a very small increase in memory over a single-model coder. The approach applies to any entropy coding scheme, such as Huffman or arithmetic coding. It works by employing multiple entropy models only for the most probable symbols and fewer entropy models for the less probable symbols. We show that this algorithm reduces the audio coding bitrate by 5%-8% over an existing algorithm that uses the same amount of table memory, by allowing effective switching of the entropy model as source statistics change over an audio transform block.
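The core idea in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual algorithm: it keeps per-model probabilities only for the K globally most probable symbols, merges all remaining symbols into one shared tail model reached via an "escape" entry, and measures cost by ideal code length (-log2 p). All names (`build_selective_models`, `"ESC"`, etc.) are hypothetical.

```python
import math

def build_selective_models(models, k):
    """models: list of dicts mapping symbol -> probability (one dict
    per entropy model/context). Keep per-model entries only for the k
    globally most probable symbols; all other symbols share a single
    tail model (here: the renormalised average over contexts)."""
    symbols = list(models[0].keys())
    avg = {s: sum(m[s] for m in models) / len(models) for s in symbols}
    head = set(sorted(symbols, key=lambda s: -avg[s])[:k])
    selective = []
    for m in models:
        sel = {s: m[s] for s in head}
        # probability mass of all tail symbols, coded as one escape symbol
        sel["ESC"] = sum(p for s, p in m.items() if s not in head)
        selective.append(sel)
    tail = {s: avg[s] for s in symbols if s not in head}
    z = sum(tail.values())
    tail = {s: p / z for s, p in tail.items()}
    return selective, tail

def bits(model, sym, tail=None):
    """Ideal code length in bits: -log2 p for head symbols, plus the
    escape cost followed by the shared tail model for the rest."""
    if sym in model:
        return -math.log2(model[sym])
    return -math.log2(model["ESC"]) - math.log2(tail[sym])
```

With M models, an alphabet of size N, and K head symbols, the table memory drops from roughly M*N entries to M*(K+1) + (N-K), while frequent symbols still benefit from per-context statistics; this mirrors the trade-off the abstract describes, though the paper's actual construction may differ.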

Published in:

2008 IEEE 10th Workshop on Multimedia Signal Processing

Date of Conference:

8-10 Oct. 2008