Verification Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing

Authors: Fan Zhang and Henry D. Pfister
Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA

This paper considers the performance of (j, k)-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the q-ary symmetric channel (q-SC). For the BEC and a fixed j, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the q-SC and a fixed j, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly sparse signals. A DE-based approach is used to analyze CS systems with randomized reconstruction guarantees. This leads to the result that strictly sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.
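As a rough illustration of the BEC scaling law quoted above, the sketch below iterates the standard density-evolution recursion for a (j, k)-regular ensemble, $x_{\ell+1} = \varepsilon\,(1 - (1 - x_\ell)^{k-1})^{j-1}$, and bisects for the threshold $\varepsilon^*(j, k)$. This is a minimal sketch, not the authors' code; the function names (`de_converges`, `bec_threshold`) and the iteration/tolerance parameters are illustrative choices.

```python
# Minimal sketch (not the paper's code): density evolution for a
# (j, k)-regular LDPC ensemble on the BEC with erasure probability eps.
# Standard recursion: x_{l+1} = eps * (1 - (1 - x_l)**(k-1))**(j-1).

def de_converges(eps, j, k, iters=5000, tol=1e-12):
    """True if DE drives the erasure fraction (numerically) to zero."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (k - 1)) ** (j - 1)
        if x < tol:
            return True
    return False

def bec_threshold(j, k, steps=50):
    """Bisect for the largest eps at which density evolution converges."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, j, k):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    j = 3
    for k in (6, 12, 24, 48, 96):
        t = bec_threshold(j, k)
        # For fixed j, k * threshold should approach a constant,
        # consistent with the Theta(k^{-1}) scaling in the abstract.
        print(f"(j,k)=({j},{k})  threshold={t:.5f}  k*threshold={k*t:.4f}")
```

For fixed j, the printed product k · ε* stays roughly constant as k grows, which is the $\Theta(k^{-1})$ behavior; growing k is the high-rate regime, since the design rate 1 − j/k of a (j, k)-regular code approaches 1.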

Published in: IEEE Transactions on Information Theory (Volume: 58, Issue: 8)