Although in many network scenarios erasure-impaired sources are a more appropriate data model than the commonly used Gaussian and binary sources, they entered the compression-coding scene only recently, with the introduction of binary erasure quantization over sparse graphs. Binary erasure quantization (BEQ) considers ternary sources (zeros, ones, and erasures) and binary reconstructions: Hamming distortion is measured on the non-erasure source symbols, while any binary reconstruction of an erasure symbol incurs zero distortion. We believe that constructive schemes for BEQ deserve more attention. We focus on rate-optimal zero-distortion BEQ schemes, which completely recover both the unerased bits and the positions of the erasures in the source sequence. We analyze the suitability of rateless codes for this form of BEQ in terms of compression and decompression complexity and of the rate gap with respect to the theoretically optimal rate. We demonstrate how duals of properly designed fountain codes can be used for erasure-rate-adaptive lossless compression if the compression-rate gap is traded off against encoding and decoding complexity. We also show that a better trade-off might exist if the dual code is doped with additional fake erasures, although these results are not conclusive. Finally, an important contribution is that we identify and explain an idiosyncrasy of fountain codes in the construction of the dual codes employed for quantization. The common starting point is iterative erasure decoding with LDPC codes, which operates on the parity-check matrix; the same graph can therefore be reused for quantization, since it represents the generator matrix of the dual code. This is not the case for LT codes, whose decoder operates on the same graph as the encoder, so their dualization must take a completely different track, as explained here.
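The duality fact the abstract relies on is classical: for any binary linear code C, the parity-check matrix H of C is a generator matrix of the dual code C-perp, which is why the sparse graph driving iterative LDPC erasure decoding can directly generate dual codewords for quantization. The following minimal sketch (not the paper's construction; it uses the standard (7,4) Hamming code purely for illustration) verifies this over GF(2):

```python
import numpy as np

# Hedged illustration: for a binary linear code C with generator matrix G and
# parity-check matrix H, H is a generator matrix of the dual code C-perp.
# This is the property that lets the same sparse graph (H) serve both for
# iterative erasure decoding of C and as the generator of the dual code.

# (7,4) Hamming code in systematic form: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), P])    # 4 x 7 generator of C
H = np.hstack([P.T, np.eye(3, dtype=int)])  # 3 x 7 parity-check of C

# Orthogonality over GF(2): every row of G is orthogonal to every row of H.
assert np.all((G @ H.T) % 2 == 0)

def span(M):
    """All GF(2) linear combinations of the rows of M."""
    k = M.shape[0]
    return {tuple(np.array(bits) @ M % 2)
            for bits in np.ndindex(*(2,) * k)}

C = span(G)       # 2^4 = 16 codewords
C_dual = span(H)  # 2^3 = 8 codewords

# The dual by definition: all length-7 words orthogonal to every codeword of C.
dual_by_def = {c for c in np.ndindex(*(2,) * 7)
               if all(sum(a * b for a, b in zip(c, w)) % 2 == 0 for w in C)}
assert C_dual == dual_by_def
print(len(C), len(C_dual))  # 16 8
```

The same identity holds for any LDPC parity-check matrix; the LT-code case differs precisely because the LT decoder is defined on the encoder's generator graph rather than on a parity-check graph.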
Date of Conference: 28-30 Sept. 2011