Is rateless paradigm fitted for lossless compression of erasure-impaired sources?

Even though in many network scenarios erasure-impaired sources are a more appropriate data model than the frequently used Gaussian and binary sources, they entered the scene of compression coding only recently, with the introduction of binary erasure quantization over sparse graphs. Binary erasure quantization (BEQ) considers ternary sources (zeros, ones, and erasures) and binary reconstructions, where Hamming distortion is defined for the non-erasure source symbols and the distortion is zero for any binary reconstruction of an erasure symbol. We believe that constructive schemes for binary erasure quantization deserve more attention. We focus on rate-optimal zero-distortion BEQ schemes, which completely recover both the unerased bits and the positions of the erasures in the source sequence. We analyze the suitability of rateless codes for this form of BEQ in terms of compression and decompression complexity and the rate gap with respect to the theoretically optimal rate. We demonstrate how duals of properly designed fountain codes can be used for erasure-rate-adaptive lossless compression if the compression rate gap is traded off against the complexity of encoding and decoding. We also show that a better trade-off may exist if the dual code is doped with additional fake erasures, although these results are not conclusive. Finally, an important contribution is that we recognize and explain an idiosyncrasy of fountain codes in the construction of the dual codes employed in quantization. The common starting point is iterative erasure decoding with LDPC codes, which operates on the parity-check matrix; the same graph can therefore be used for quantization, as it represents the generator matrix of the dual code. This is not the case with LT codes, whose decoder operates on the same graph as the encoder, so the dualization must take a completely different track, as explained here.
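
The BEQ distortion measure described above (Hamming distortion on the non-erased source symbols, zero distortion for any binary reconstruction of an erased symbol) can be stated as a short sketch. The Python function and erasure marker below are illustrative assumptions, not code from the paper.

    def beq_distortion(source, reconstruction, erasure='*'):
        # Average BEQ distortion between a ternary source sequence
        # (0, 1 and an erasure marker) and a binary reconstruction:
        # Hamming distortion on non-erased symbols, zero on erased ones.
        assert len(source) == len(reconstruction)
        mismatches = sum(1 for s, r in zip(source, reconstruction)
                         if s != erasure and s != r)
        return mismatches / len(source)

    # Zero distortion: erased positions may be reconstructed with any bit.
    # beq_distortion('10*1*0', '101100') == 0.0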

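The duality remark at the end of the abstract starts from iterative erasure decoding over the LDPC parity-check graph. A minimal sketch of that peeling decoder is given below, assuming a binary parity-check matrix H and erased positions marked as None; the function name and the small example matrix are illustrative, not the construction analyzed in the paper.

    import numpy as np

    def peel_decode(H, y):
        # Iterative (peeling) erasure decoding on a parity-check matrix H.
        # y is a list of bits (0/1) with None marking erased positions.
        x = list(y)
        erased = {i for i, v in enumerate(x) if v is None}
        progress = True
        while erased and progress:
            progress = False
            for row in H:
                involved = list(np.flatnonzero(row))
                unknown = [j for j in involved if j in erased]
                if len(unknown) == 1:
                    # A check with exactly one erased bit fixes that bit,
                    # since the check equation must sum to 0 modulo 2.
                    j = unknown[0]
                    x[j] = sum(x[k] for k in involved if k != j) % 2
                    erased.discard(j)
                    progress = True
        return None if erased else x  # None if decoding stalls

    # Example: a (7,4) Hamming parity-check matrix and two erasures.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    print(peel_decode(H, [None, 0, 0, 0, None, 1, 0]))  # [1, 0, 0, 0, 1, 1, 0]

Because this decoder operates directly on the parity-check matrix, that same matrix serves as the generator of the dual code used for quantization; an LT decoder operates on the encoder graph, so no such shortcut applies there.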
Published in:

2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton)

Date of Conference:

28-30 Sept. 2011