IEEE Transactions on Information Theory

Volume 47, Issue 2 • February 2001

  • Introduction to the special issue on codes on graphs and iterative algorithms

    Page(s): 493 - 497
    PDF (51 KB) | Freely Available from IEEE
  • Contributors

    Page(s): 850 - 853
    PDF (39 KB) | Freely Available from IEEE
  • Factor graphs and the sum-product algorithm

    Page(s): 498 - 519
    PDF (464 KB) | HTML

    Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of “local” functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call a factor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes, either exactly or approximately, various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative “turbo” decoding algorithm, Pearl's (1988) belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms.

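    As a minimal illustration of the factorization idea (a Python sketch with a made-up two-factor function, not an example taken from the paper), the marginal of x2 can be computed either by brute force or, via the distributive law, by summing out each local factor separately and multiplying the resulting "messages":

        X = [0, 1]  # binary variable alphabet

        def fA(x1, x2):              # made-up local function on (x1, x2)
            return 1.0 + x1 * x2

        def fB(x2, x3):              # made-up local function on (x2, x3)
            return 2.0 if x2 == x3 else 1.0

        # Brute force: sum the global function fA*fB over x1 and x3.
        g2_brute = {x2: sum(fA(x1, x2) * fB(x2, x3) for x1 in X for x3 in X)
                    for x2 in X}

        # Sum-product: sum out each factor separately, then multiply the
        # two messages arriving at x2 -- the distributive law at work.
        msg_A = {x2: sum(fA(x1, x2) for x1 in X) for x2 in X}
        msg_B = {x2: sum(fB(x2, x3) for x3 in X) for x2 in X}
        g2_sp = {x2: msg_A[x2] * msg_B[x2] for x2 in X}

        assert g2_brute == g2_sp     # exact: this factor graph is cycle-free
        print(g2_sp)
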
  • Codes on graphs: normal realizations

    Page(s): 520 - 548
    PDF (584 KB) | HTML

    A generalized state realization of the Wiberg (1996) type is called normal if symbol variables have degree 1 and state variables have degree 2. A natural graphical model of such a realization has leaf edges representing symbols, ordinary edges representing states, and vertices representing local constraints. Such a graph can be decoded by any version of the sum-product algorithm. Any state realization of a code can be put into normal form without essential change in the corresponding graph or in its decoding complexity. Group or linear codes are generated by group or linear state realizations. On a cycle-free graph, there exists a well-defined minimal canonical realization, and the sum-product algorithm is exact. However, the cut-set bound shows that graphs with cycles may have a superior performance-complexity tradeoff, although the sum-product algorithm is then inexact and iterative, and minimal realizations are not well-defined. Efficient cyclic and cycle-free realizations of Reed-Muller (RM) codes are given as examples. The dual of a normal group realization, appropriately defined, generates the dual group code. The dual realization has the same graph topology as the primal realization, replaces symbol and state variables by their character groups, and replaces primal local constraints by their duals. This fundamental result has many applications, including dual state spaces, dual minimal trellises, duals to Tanner (1981) graphs, dual input/output (I/O) systems, and dual kernel and image representations. Finally, a group code may be decoded using the dual graph, with appropriate Fourier transforms of the inputs and outputs; this can simplify decoding of high-rate codes.

  • Improved low-density parity-check codes using irregular graphs

    Page(s): 585 - 598
    PDF (260 KB) | HTML

    We construct new families of error-correcting codes based on Gallager's (1973) low-density parity-check codes. We improve on Gallager's results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low-density parity-check codes may be able to match or beat turbo code performance.

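    For a flavor of this style of hard-decision analysis, the sketch below runs the classical density-evolution recursion for Gallager's algorithm A on a regular (3,6) ensemble over the binary-symmetric channel and bisects for the threshold; the regular case is the baseline that the paper's irregular constructions improve upon.

        # Density evolution for Gallager's algorithm A, regular (dv, dc)
        # ensemble, binary-symmetric channel with crossover probability p0.

        def gallager_a_converges(p0, dv=3, dc=6, iters=500):
            x = p0
            for _ in range(iters):
                q = (1.0 - (1.0 - 2.0 * x) ** (dc - 1)) / 2.0  # check error prob
                x = p0 * (1.0 - (1.0 - q) ** (dv - 1)) + (1.0 - p0) * q ** (dv - 1)
            return x < 1e-10

        lo, hi = 0.0, 0.5
        for _ in range(50):                      # bisect for the threshold
            mid = 0.5 * (lo + hi)
            if gallager_a_converges(mid):
                lo = mid
            else:
                hi = mid
        print(f"(3,6) Gallager-A threshold ~= {lo:.4f}")  # roughly 0.04
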
  • The capacity of low-density parity-check codes under message-passing decoding

    Page(s): 599 - 618
    PDF (480 KB) | HTML

    We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, when transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable, and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

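    The flavor of the capacity computation is easiest to see on the binary erasure channel, where the message density collapses to a single erasure probability per iteration; a sketch for an assumed (3,6)-regular ensemble:

        # Density evolution on the binary erasure channel: one number per
        # iteration (the message erasure probability); bisect for the
        # largest channel erasure probability at which decoding succeeds.

        def converges(eps, dv=3, dc=6, iters=2000, tol=1e-12):
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                if x < tol:
                    return True
            return False

        lo, hi = 0.0, 1.0
        for _ in range(50):                      # bisect for the threshold
            mid = 0.5 * (lo + hi)
            if converges(mid):
                lo = mid
            else:
                hi = mid
        print(f"(3,6) BEC threshold ~= {lo:.4f}")   # known to be ~0.4294
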
  • Signal-space characterization of iterative decoding

    Page(s): 766 - 781
    PDF (440 KB)

    By tracing the flow of computations in the iterative decoders for low-density parity-check codes, we formulate a signal-space view for a finite number of iterations in a finite-length code. On a Gaussian channel, maximum a posteriori (MAP) codeword decoding (or “maximum-likelihood decoding”) decodes to the codeword signal that is closest to the channel output in Euclidean distance. In contrast, we show that iterative decoding decodes to the “pseudosignal” that has highest correlation with the channel output. The set of pseudosignals corresponds to “pseudocodewords”, only a vanishingly small number of which correspond to codewords. We show that some pseudocodewords cause decoding errors, but that there are also pseudocodewords that frequently correct the deleterious effects of other pseudocodewords.

  • Expander graph arguments for message-passing algorithms

    Page(s): 782 - 790
    PDF (244 KB) | HTML

    We show how expander-based arguments may be used to prove that message-passing algorithms can correct a linear number of erroneous messages. The implication of this result is that when the block length is sufficiently large, once a message-passing algorithm has corrected a sufficiently large fraction of the errors, it will eventually correct all errors. This result is then combined with known results on the ability of message-passing algorithms to reduce the number of errors to an arbitrarily small fraction for relatively high transmission rates. The results hold for various message-passing algorithms, including Gallager's hard-decision and soft-decision (with clipping) decoding algorithms. Our results assume low-density parity-check (LDPC) codes based on an irregular bipartite graph.

  • Efficient erasure correcting codes

    Page(s): 569 - 584
    PDF (384 KB) | HTML

    We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct for any given rate R and any given real number ε a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length n. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1+ε)Rn or more. The recovery algorithm also runs in time proportional to n ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed.

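    A sketch of the erasure-recovery idea on a toy graph (the matrix and received word are illustrative, not from the paper): repeatedly find a parity check with exactly one erased bit and solve for it:

        import numpy as np

        # Peeling-style erasure recovery: a check with exactly one erased
        # neighbor determines that bit as the XOR of its known bits.
        H = np.array([[1, 1, 0, 1, 0],
                      [0, 1, 1, 0, 1],
                      [1, 0, 1, 1, 1]])

        def peel(received):                # None marks an erased position
            bits = list(received)
            progress = True
            while progress:
                progress = False
                for row in H:
                    erased = [j for j in np.flatnonzero(row) if bits[j] is None]
                    if len(erased) == 1:
                        j = erased[0]
                        known = [bits[k] for k in np.flatnonzero(row) if k != j]
                        bits[j] = sum(known) % 2
                        progress = True
            return bits

        print(peel([0, None, 1, None, 1]))   # recovers [0, 0, 1, 0, 1]
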
  • Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation

    Page(s): 657 - 670
    PDF (340 KB) | HTML

    Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating the means of the Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and carried out graphically using this visualization.

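    A sketch of the resulting one-dimensional mean recursion (the φ function is evaluated here by plain numerical quadrature; the ensemble and noise levels are illustrative):

        import numpy as np

        # Mean recursion of the Gaussian approximation for a regular
        # (dv, dc) ensemble on the AWGN channel. phi(m) = 1 - E[tanh(u/2)]
        # with u ~ N(m, 2m); values and parameters below are illustrative.

        def phi(m):
            if m <= 0:
                return 1.0
            s = np.sqrt(2.0 * m)
            u = np.linspace(m - 12 * s, m + 12 * s, 4001)
            pdf = np.exp(-(u - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
            return max(0.0, 1.0 - np.sum(np.tanh(u / 2.0) * pdf) * (u[1] - u[0]))

        def phi_inv(y):                        # phi is decreasing: bisect
            lo, hi = 1e-9, 100.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if phi(mid) > y:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        def mean_evolution(sigma, dv=3, dc=6, iters=100):
            m_ch = 2.0 / sigma ** 2            # channel LLR mean, BPSK/AWGN
            m_c = 0.0                          # mean of check-to-variable msgs
            for _ in range(iters):
                m_v = m_ch + (dv - 1) * m_c    # variable-node update
                m_c = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))
            return m_c

        print(mean_evolution(0.85))  # grows large: noise below threshold
        print(mean_evolution(1.00))  # stays small: noise above threshold
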
  • Minimum-distance bounds by graph analysis

    Page(s): 808 - 821
    PDF (332 KB) | HTML

    The parity-check matrix of a linear code is used to define a bipartite code constraint (Tanner) graph in which bit nodes are connected to parity-check nodes. The connectivity properties of this graph are analyzed using both local connectivity and the eigenvalues of the associated adjacency matrix. A simple lower bound on the minimum distance of the code is expressed in terms of the two largest eigenvalues. For a more powerful bound, local properties of the subgraph corresponding to a minimum-weight word in the code are used to create an optimization problem whose solution is a lower bound on the code's minimum distance. Linear programming gives one bound. The technique is illustrated by applying it to sparse block codes with parameters [7,3,4] and [42,23,6].

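    The ingredients of the simple spectral bound are easy to produce; this sketch builds the Tanner-graph adjacency matrix of an illustrative Hamming-code parity-check matrix and reads off its two largest eigenvalues (the bound formula itself is given in the paper):

        import numpy as np

        # Adjacency matrix of the Tanner graph of a small, illustrative
        # parity-check matrix, and its two largest eigenvalues.
        H = np.array([[1, 1, 1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0, 1, 0],
                      [1, 0, 1, 1, 0, 0, 1]])
        m, n = H.shape
        A = np.zeros((n + m, n + m))
        A[:n, n:] = H.T                # bit nodes adjacent to check nodes
        A[n:, :n] = H
        eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
        print(f"lambda_1 = {eigs[0]:.3f}, lambda_2 = {eigs[1]:.3f}")
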
  • Analyzing the turbo decoder using the Gaussian approximation

    Page(s): 671 - 686
    PDF (272 KB) | HTML

    We introduce a simple technique for analyzing the iterative decoder that is broadly applicable to different classes of codes defined over graphs in certain fading as well as additive white Gaussian noise (AWGN) channels. The technique is based on the observation that the extrinsic information from constituent maximum a posteriori (MAP) decoders is well approximated by Gaussian random variables when the inputs to the decoders are Gaussian. The independent Gaussian model implies the existence of an iterative decoder threshold that statistically characterizes the convergence of the iterative decoder. Specifically, the iterative decoder converges to zero probability of error as the number of iterations increases if and only if the channel Eb/N0 exceeds the threshold. Despite the idealization of the model and the simplicity of the analysis technique, the predicted threshold values are in excellent agreement with the waterfall regions observed experimentally in the literature when the codeword lengths are large. Examples are given for parallel concatenated convolutional codes, serially concatenated convolutional codes, and the generalized low-density parity-check (LDPC) codes of Gallager and Cheng-McEliece (1996). Convergence-based design of asymmetric parallel concatenated convolutional codes (PCCC) is also discussed.

  • Tanner graphs for group block codes and lattices: construction and complexity

    Page(s): 822 - 834
    PDF (332 KB) | HTML

    We develop a Tanner graph (TG) construction for an Abelian group block code L with arbitrary alphabets at different coordinates, an important application of which is the representation of the label code of a lattice. The construction is based on the modular linear constraints imposed on the code symbols by a set of generators for the dual code L*. As a necessary step toward the construction of a TG for L, we devise an efficient algorithm for finding a generating set for L*. In the process, we develop a construction for lattices based on an arbitrary Abelian group block code, called generalized Construction A (GCA), and explore relationships among a group code, its GCA lattice, and their duals. We also study the problem of finding low-complexity TGs for Abelian group block codes and lattices, and derive tight lower bounds on the label-code complexity of lattices. It is shown that for many important lattices, the minimal label codes which achieve the lower bounds cannot be supported by cycle-free Tanner graphs.

  • Probability propagation and decoding in analog VLSI

    Page(s): 837 - 843
    PDF (208 KB) | HTML

    The sum-product algorithm (belief/probability propagation) can be naturally mapped into analog transistor circuits. These circuits enable the construction of analog-VLSI decoders for turbo codes, low-density parity-check codes, and similar codes.

  • Concatenated tree codes: a low-complexity, high-performance approach

    Page(s): 791 - 799
    PDF (212 KB) | HTML

    This paper is concerned with a family of concatenated tree (CT) codes. CT codes are special low-density parity-check (LDPC) codes consisting of several trees with large spans. They can also be regarded as special turbo codes with hybrid recursive/nonrecursive parts and multiple constituent codes. CT codes are decodable by the belief-propagation algorithm. They combine many advantages of LDPC and turbo codes, such as low decoding cost, fast convergence speed, and good performance.

  • Efficient encoding of low-density parity-check codes

    Page(s): 638 - 656
    PDF (472 KB) | HTML

    Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. We consider the encoding problem for LDPC codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n≃100000 is still quite practical. More importantly, we show that “optimized” codes actually admit linear-time encoding.

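    A simplified sketch of the underlying idea (illustrative matrices; the paper treats the general, approximately triangular case): if H = [A | T] over GF(2) with T lower triangular, the parity bits follow from the systematic bits by sparse back substitution:

        import numpy as np

        # With H = [A | T] and T lower triangular over GF(2), the parity
        # bits p solve A s + T p = 0 by back substitution, touching each
        # nonzero of H once. The matrices here are illustrative.
        A = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1]])
        T = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [0, 1, 1]])

        def encode(s):
            b = A @ s % 2                      # b = A s over GF(2)
            p = np.zeros(len(b), dtype=int)
            for i in range(len(b)):            # solve T p = b top-down
                p[i] = (b[i] + T[i, :i] @ p[:i]) % 2
            return np.concatenate([s, p])

        x = encode(np.array([1, 0, 1]))
        H = np.hstack([A, T])
        print(x, H @ x % 2)                    # syndrome is all-zero
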
  • On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs

    Page(s): 736 - 744
    PDF (200 KB) | HTML

    Graphical models, such as Bayesian networks and Markov random fields (MRFs), represent statistical dependencies of variables by a graph. The max-product “belief propagation” algorithm is a local-message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable values of the unobserved variables given the observed ones. Good empirical performance has been obtained by running the max-product algorithm (or the equivalent min-sum algorithm) on graphs with loops, for applications including the decoding of “turbo” codes. Except for two simple graphs (cycle codes and single-loop graphs) there has been little theoretical understanding of the max-product algorithm on graphs with loops. Here we prove a result on the fixed points of max-product on a graph with arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on a fixed point is a “neighborhood maximum” of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than that of all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs, this neighborhood is exponentially large. We illustrate the analysis with examples.

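    On a tree the fixed point is exact, which is easy to verify directly; a minimal sketch on a three-variable chain with made-up potentials, checked against brute-force MAP:

        import itertools
        import numpy as np

        # Max-product on a three-variable chain (a tree). On trees the
        # fixed point is exact, as the abstract notes; the paper's result
        # concerns graphs with loops. Potentials are illustrative numbers.
        psi12 = np.array([[1.0, 2.0], [3.0, 0.5]])  # potential on (x1, x2)
        psi23 = np.array([[2.0, 1.0], [0.5, 4.0]])  # potential on (x2, x3)

        def p(x1, x2, x3):              # unnormalized joint probability
            return psi12[x1, x2] * psi23[x2, x3]

        m1 = psi12.max(axis=0)          # message x1 -> x2 (max over x1)
        m3 = psi23.max(axis=1)          # message x3 -> x2 (max over x3)
        x2 = int(np.argmax(m1 * m3))    # decide x2 from its max-belief
        x1 = int(np.argmax(psi12[:, x2]))   # backtrack the maximizers
        x3 = int(np.argmax(psi23[x2, :]))

        best = max(itertools.product([0, 1], repeat=3), key=lambda x: p(*x))
        assert (x1, x2, x3) == best     # exact MAP on this tree
        print((x1, x2, x3))
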
  • An analysis of belief propagation on the turbo decoding graph with Gaussian densities

    Page(s): 745 - 765
    PDF (520 KB) | HTML

    Motivated by its success in decoding turbo codes, we provide an analysis of the belief propagation algorithm on the turbo decoding graph with Gaussian densities. In this context, we are able to show that, under certain conditions, the algorithm converges and that, somewhat surprisingly, though the density generated by belief propagation may differ significantly from the desired posterior density, the means of these two densities coincide. Since computation of posterior distributions is tractable when densities are Gaussian, use of belief propagation in such a setting may appear unwarranted. Indeed, our primary motivation for studying belief propagation in this context stems from a desire to enhance our understanding of the algorithm's dynamics in a non-Gaussian setting, and to gain insights into its excellent performance in turbo codes. Nevertheless, even when the densities are Gaussian, belief propagation may sometimes provide a more efficient alternative to traditional inference methods.

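    A loosely related minimal illustration, using standard Gaussian belief propagation on a small pairwise model rather than the paper's turbo-decoding graph: on a loopy Gaussian model the propagated means agree with the exact posterior means (the matrices are illustrative and diagonally dominant, so the iteration converges):

        import numpy as np

        # Gaussian belief propagation on a 3-variable single loop. The
        # means it produces match exact inference, though the variances
        # on a loopy graph need not. p(x) ~ exp(-x'Jx/2 + h'x).
        J = np.array([[2.0, 0.5, 0.5],
                      [0.5, 2.0, 0.5],
                      [0.5, 0.5, 2.0]])
        h = np.array([1.0, 2.0, 3.0])
        n = len(h)
        P = np.zeros((n, n))           # P[i, j]: precision of message i->j
        m = np.zeros((n, n))           # m[i, j]: potential of message i->j
        for _ in range(200):
            for i in range(n):
                for j in range(n):
                    if i == j or J[i, j] == 0.0:
                        continue
                    Pi = J[i, i] + sum(P[k, i] for k in range(n) if k not in (i, j))
                    hi = h[i] + sum(m[k, i] for k in range(n) if k not in (i, j))
                    P[i, j] = -J[i, j] ** 2 / Pi
                    m[i, j] = -J[i, j] * hi / Pi
        means = np.array([(h[i] + sum(m[k, i] for k in range(n) if k != i)) /
                          (J[i, i] + sum(P[k, i] for k in range(n) if k != i))
                          for i in range(n)])
        print(means)                   # matches the exact posterior means
        print(np.linalg.solve(J, h))   # = [0, 2/3, 4/3]
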
  • On expander codes

    Page(s): 835 - 837
    PDF (112 KB) | HTML

    Sipser and Spielman (see ibid., vol.42, p.1717-22, Nov. 1996) have introduced a constructive family of asymptotically good linear error-correcting codes, expander codes, together with a simple parallel algorithm that will always remove a constant fraction of errors. We introduce a variation on their decoding algorithm that, with no extra cost in complexity, provably corrects up to 12 times more errors.

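    The flavor of such decoders can be sketched with the classical bit-flipping rule, flipping any bit that sits in more unsatisfied than satisfied checks; the toy matrix is illustrative, and the papers' guarantees require the graph to be a genuine expander:

        import numpy as np

        # Bit-flipping ("flip") decoding: repeatedly flip any bit that is
        # involved in more unsatisfied than satisfied checks.
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1],
                      [0, 0, 0, 1, 1, 1]])

        def flip_decode(y, max_rounds=50):
            y = y.copy()
            for _ in range(max_rounds):
                syndrome = H @ y % 2
                if not syndrome.any():
                    return y                   # all checks satisfied
                unsat = H.T @ syndrome         # unsatisfied checks per bit
                sat = H.T @ (1 - syndrome)     # satisfied checks per bit
                j = int(np.argmax(unsat - sat))
                if unsat[j] <= sat[j]:
                    return y                   # stuck: no bit qualifies
                y[j] ^= 1
            return y

        y = np.zeros(6, dtype=int)
        y[2] = 1                               # one error on the zero codeword
        print(flip_decode(y))                  # returns the all-zero codeword
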
  • Design of capacity-approaching irregular low-density parity-check codes

    Page(s): 619 - 637
    PDF (516 KB) | HTML

    We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on the work of Richardson and Urbanke (see ibid., vol.47, no.2, p.599-618, 2001). Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.

  • Zigzag codes and concatenated zigzag codes

    Page(s): 800 - 807
    PDF (224 KB) | HTML

    This paper introduces a family of error-correcting codes called zigzag codes. A zigzag code is described by a highly structured zigzag graph. Due to the structural properties of the graph, very low-complexity soft-in/soft-out decoding rules can be implemented. We present a decoding rule, based on the Max-Log-APP (MLA) formulation, which requires a total of only 20 addition-equivalent operations per information bit, per iteration. Simulation of a rate-1/2 concatenated zigzag code with four constituent encoders and interleaver length 65 536 yields a bit error rate (BER) of 10^-5 at 0.9 dB and 1.3 dB away from the Shannon limit with optimal (APP) and low-cost suboptimal (MLA) decoders, respectively. A union bound analysis of the bit error probability of the zigzag code is presented. It is shown that the union bounds for these codes can be generated very efficiently. It is also illustrated that, for a fixed interleaver size, the concatenated code has increased code potential as the number of constituent encoders increases. Finally, the analysis shows that zigzag codes with four or more constituent encoders have lower error floors than comparable turbo codes with two constituent encoders.

  • Unified design of iterative receivers using factor graphs

    Page(s): 843 - 849
    PDF (224 KB) | HTML

    Iterative algorithms are an attractive approach to approximating optimal, but high-complexity, joint channel estimation and decoding receivers for communication systems. We present a unified approach based on factor graphs for deriving iterative message-passing receiver algorithms for channel estimation and decoding. For many common channels, it is easy to find simple graphical models that lead directly to implementable algorithms. Canonical distributions provide a new, general framework for handling continuous variables. Example receiver designs for Rayleigh fading channels with block or Markov memory, and multipath fading channels with fixed unknown coefficients, illustrate the effectiveness of our approach.

  • Reliable communication over channels with insertions, deletions, and substitutions

    Page(s): 687 - 698
    PDF (304 KB) | HTML

    A new block code is introduced which is capable of correcting multiple insertion, deletion, and substitution errors. The code consists of nonlinear inner codes, which we call “watermark” codes, concatenated with low-density parity-check codes over nonbinary fields. The inner code allows probabilistic resynchronization and provides soft outputs for the outer decoder, which then completes decoding. We present codes of rate 0.7 and transmitted length 5000 bits that can correct 30 insertion/deletion errors per block. We also present codes of rate 3/14 and length 4600 bits that can correct 450 insertion/deletion errors per block.

  • Convergence analysis of turbo decoding of product codes

    Page(s): 723 - 735
    PDF (416 KB) | HTML

    The geometric interpretation of turbo decoding has provided an analytical basis and tools for the analysis of this algorithm. We focus on turbo decoding of product codes, and based on the geometric framework, we extend the analytical results and show how analysis tools can be practically adapted for this case. Specifically, we investigate the algorithm's stability and its convergence rate. We present new results concerning the structure and properties of stability matrices of the algorithm, and develop upper bounds on the algorithm's convergence rate. We prove that for any 2×2 (information bits) product codes, there is a unique and stable fixed point. For the general case, we present sufficient conditions for stability. The interpretation of these conditions provides an insight into the behavior of the decoding algorithm. Simulation results, which support and extend the theoretical analysis, are presented for Hamming [(7,4,3)]^2 and Golay [(24,12,8)]^2 product codes.

  • Dynamic programming and the graphical representation of error-correcting codes

    Page(s): 549 - 568
    PDF (460 KB) | HTML

    Graphical representations of codes facilitate the design of computationally efficient decoding algorithms. This is an example of a general connection between dependency graphs, as arise in the representations of Markov random fields, and the dynamic programming principle. We concentrate on two computational tasks: finding the maximum-likelihood codeword and finding its posterior probability, given a signal received through a noisy channel. These two computations lend themselves to a particularly elegant version of dynamic programming, whereby the decoding complexity is particularly transparent. We explore some codes and some graphical representations designed specifically to facilitate computation. We further explore a coarse-to-fine version of dynamic programming that can produce an exact maximum-likelihood decoding many orders of magnitude faster than ordinary dynamic programming.

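    A minimal sketch of trellis dynamic programming: a hard-decision Viterbi decoder for the standard memory-2, rate-1/2 convolutional code with octal generators (7,5) (an illustrative example, not one of the paper's codes):

        # Viterbi decoding = dynamic programming on the code trellis,
        # here with a Hamming-distance metric on hard decisions.

        def encode(bits):
            s1 = s2 = 0
            out = []
            for b in bits:
                out += [b ^ s1 ^ s2, b ^ s2]   # generators 111 and 101
                s1, s2 = b, s1
            return out

        def viterbi(received):
            cost = {(0, 0): 0}                 # state (s1, s2) -> best metric
            back = []                          # per step: state -> (prev, bit)
            for t in range(0, len(received), 2):
                r = received[t:t + 2]
                new, choice = {}, {}
                for (s1, s2), c in cost.items():
                    for b in (0, 1):
                        out = [b ^ s1 ^ s2, b ^ s2]
                        d = c + (out[0] != r[0]) + (out[1] != r[1])
                        nxt = (b, s1)
                        if nxt not in new or d < new[nxt]:
                            new[nxt] = d
                            choice[nxt] = ((s1, s2), b)
                cost = new
                back.append(choice)
            state = min(cost, key=cost.get)    # best end state (unterminated)
            bits = []
            for choice in reversed(back):      # trace the best path back
                state, b = choice[state]
                bits.append(b)
            return bits[::-1]

        msg = [1, 0, 1, 1, 0]
        rx = encode(msg)
        rx[3] ^= 1                             # one channel bit error
        assert viterbi(rx) == msg              # ML decoding corrects it
        print(viterbi(rx))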

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.
