
SECTION I

INTRODUCTION

LDPC CODES are known as a class of capacity-approaching codes, in the sense of Shannon's limit, when decoded with sum-product decoding [1]. Computer experiments have shown that LDPC codes can achieve good error-correcting performance. On the other hand, there is no known formula for accurately evaluating the error-rate of sum-product decoding of a given LDPC code. LDPC codes have been chosen for standardization of communication products, for example, DVB standards [2] and wireless communication [3]. For practical use, extremely small error-rates are required, and the smaller the required error-rate, the higher the computational cost of computer experiments. The ideal goal of our research is to give a formula for the accurate error-rate of a given LDPC code. Therefore, obtaining such a formula is meaningful not only for theoretical interest, but also for the development of consumer equipment.

One research trend on LDPC codes is the theoretical analysis of the code space and its related structures, e.g., “minimum distance” [4], [5], [6], “weight distribution” [7], [8], [9], “stopping set” [10], “trapping sets,” “near codewords” [11], “pseudocodewords” [12], [13], and so on. These approaches provide not only bounds or approximate values of error-rates, but also guidelines for the construction of good LDPC codes.

In this paper, we generalize sum-product decoding by introducing an additional parameter, the “initialization.” In the process, we introduce the concept of a correctable error set. While many of the previously mentioned approaches tend to identify errors that are caused by the suboptimality of iterative decoding, we attempt to characterize errors that are guaranteed to be corrected by iterative decoding.

The paper is organized as follows: In Section II, we briefly review sum-product decoding. In Section III, we introduce the concept of a correctable error set for the BSC by fixing parameters of the sum-product decoding. A few examples of correctable error sets are given. In Section IV, we analyze theoretically the word-error-rate of sum-product decoding from the point of view of multivariable functions. In Section V, we establish a relation between the correctable error set and a set of syndromes. This relation allows us to reduce the computational complexity of determining the correctable error set. Section VI presents another result that reduces computational complexity, using the symmetry of the parity-check matrix. In Section VII, we introduce two applications of our results: one gives a relation among the “initialization,” the “iteration,” and the “word-error-rate;” the other is a reduction of the computational complexity of computer experiments for quantum LDPC codes. In Section VIII, we conclude the paper and propose directions for future research.

SECTION II

SUM-PRODUCT DECODING

Consider the following general communication scenario: a sender chooses a codeword $c$ of a binary linear error-correcting code associated with an LDPC matrix $H$ and sends $c$ to a receiver over a noisy channel. If the noisy channel is a binary symmetric channel (BSC) with crossover probability $q$, each bit of the codeword $c$ flips with probability $q$ and the receiver obtains a bit sequence $y$. The receiver inputs the parameters $H$, $l_{\max}$, $q$ and the received word $y$ to a sum-product decoder ${\rm SPDec}$, where $l_{\max}$ is a parameter called the maximum iteration number. The communication succeeds if the output is $c$, and fails otherwise. The word-error-rate is the failure probability of this communication scenario. In this paper, we analyze the word-error-rate of LDPC codes with sum-product decoding over a BSC.

In this section we review the definition of the sum-product decoding algorithm. The algorithm takes as an additional input a bit sequence $s$, which is referred to as a syndrome in the following definition. If we assume $s=0$, the algorithm becomes the standard sum-product decoding.

Let $H=(H_{m,n})_{1\leq m\leq M,\, 1\leq n\leq N}$ be a binary matrix of size $M\times N$. Define $$A(m):=\{1\leq n\leq N\mid H_{m,n}=1\},\qquad B(n):=\{1\leq m\leq M\mid H_{m,n}=1\}.$$ The set $A(m)$ (resp. $B(n)$) is called the column (resp. row) index set of the $m$th row (resp. $n$th column). The sum-product decoding algorithm is performed as follows; a code sketch of the steps is given after the list.

  • Input: a binary matrix $H$, a bit sequence $y=(y_{1}, y_{2},\ldots, y_{N})\in\{0,1\}^{N}$, the crossover probability $p\in[0,1]$ of a BSC, an integer $l_{\max}$, and a bit sequence $s=(s_{1},\ldots, s_{M})\in\{0,1\}^{M}$.
  • Output: $c=(c_{1}, c_{2},\ldots, c_{N})\in\{0, 1,\emptyset\}^{N}$.
  • Step 1 (initialize): Let $Q^{0}=(q_{m,n}^{(0)})$, $Q^{1}=(q_{m,n}^{(1)})$, $R^{0}=(r_{m,n}^{(0)})$, and $R^{1}=(r_{m,n}^{(1)})$ be matrices of size $M\times N$. For all $1\leq m\leq M$, $1\leq n\leq N$ with $H_{m,n}=1$, set $$q_{m,n}^{(0)}=q_{m,n}^{(1)}=1/2,$$ and set $l=1$.
  • Step 2 (row process): For each $1\leq m\leq M$, $n\in A(m)$, and $i\in\{0, 1\}$, compute $$r_{m,n}^{(i)}:=K_{m,n}\sum_{(c_{1},\ldots,c_{N})\in X(m)^{(s_{m})},\ c_{n}=i}\ \prod_{x\in A(m)\setminus\{n\}}q_{m,x}^{(c_{x})}P(y_{x}\mid c_{x}),$$ where $X(m)^{(i)}:=\{c\in\{0,1\}^{N}\mid cH_{m}^{T}=i\}$, $H_{m}$ is the $m$th row of $H$, and $$P(a\mid b):=1-p\ \text{ if }a=b,\qquad P(a\mid b):=p\ \text{ if }a\ne b$$ for $a, b\in\{0, 1\}$. $K_{m,n}$ is a constant such that $r_{m,n}^{(0)}+r_{m,n}^{(1)}=1$.
  • Step 3 (column process): For each $1\leq n\leq N$, $m\in B(n)$, and $i=0, 1$, compute $$q_{m,n}^{(i)}:=K^{\prime}_{m,n}\prod_{x\in B(n)\setminus\{m\}}r_{x,n}^{(i)},$$ where $K^{\prime}_{m,n}$ is a constant such that $q_{m,n}^{(0)}+q_{m,n}^{(1)}=1$.
  • Step 4 (temporary word): For $1\leq n\leq N$ and $i=0, 1$, compute $$Q_{n}^{(i)}:=K^{\prime\prime}_{n}P(y_{n}\mid c_{n}=i)\prod_{x\in B(n)}r_{x,n}^{(i)}$$ and set $$\hat{c}_{n}:=0\ \text{ if }Q_{n}^{(0)}>Q_{n}^{(1)},\qquad \hat{c}_{n}:=1\ \text{ if }Q_{n}^{(0)}<Q_{n}^{(1)},\qquad \hat{c}_{n}:=\emptyset\ \text{ if }Q_{n}^{(0)}=Q_{n}^{(1)},$$ where $K^{\prime\prime}_{n}$ is a constant such that $Q_{n}^{(0)}+Q_{n}^{(1)}=1$.
  • Step 5 (parity-check): If $\hat{c}_{n}=\emptyset$ for some $n$, go to Step 6. If $(\hat{c}_{1},\ldots,\hat{c}_{N})H^{T}=s$, output $(\hat{c}_{1},\ldots,\hat{c}_{N})$ and stop the algorithm.
  • Step 6 (count the iteration number): If $l<l_{\max}$, increment $l$ and go to Step 2. If $l=l_{\max}$, output $(\hat{c}_{1},\ldots,\hat{c}_{N})$ and stop the algorithm.
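The paper gives no reference implementation; the following is a minimal sketch of Steps 1-6 in Python. All function and variable names are ours, and the restriction of the Step 2 sum to the support $A(m)$ of row $m$ is an implementation shortcut: configurations of bits outside the support affect neither the parity nor the product and only contribute a constant absorbed into $K_{m,n}$.

```python
import itertools
import numpy as np

def sp_syndrome_decode(H, y, p, l_max, s=None):
    """Probability-domain sum-product decoding of Section II (a sketch).

    H: (M, N) binary numpy array; y: length-N received word; p: initialization
    (0 < p < 1 assumed); s: syndrome, all-zero by default (which gives SPDec).
    Returns the temporary word of the last executed iteration; a tie is
    reported as None (the paper's "empty" symbol)."""
    M, N = H.shape
    s = np.zeros(M, dtype=int) if s is None else np.asarray(s)
    A = [list(map(int, np.flatnonzero(H[m]))) for m in range(M)]     # A(m)
    B = [list(map(int, np.flatnonzero(H[:, n]))) for n in range(N)]  # B(n)
    P = lambda a, b: 1 - p if a == b else p                          # BSC likelihood P(a|b)

    # Step 1 (initialize)
    q = {(m, n): [0.5, 0.5] for m in range(M) for n in A[m]}
    r = {(m, n): [0.5, 0.5] for m in range(M) for n in A[m]}
    c_hat = [None] * N

    for _ in range(l_max):
        # Step 2 (row process): sum over assignments of the other bits of row m
        # whose parity, together with c_n = i, matches the syndrome bit s_m.
        for m in range(M):
            for n in A[m]:
                others = [x for x in A[m] if x != n]
                for i in (0, 1):
                    tot = 0.0
                    for bits in itertools.product((0, 1), repeat=len(others)):
                        if (sum(bits) + i) % 2 != s[m]:
                            continue
                        prod = 1.0
                        for x, cx in zip(others, bits):
                            prod *= q[(m, x)][cx] * P(y[x], cx)
                        tot += prod
                    r[(m, n)][i] = tot
                norm = r[(m, n)][0] + r[(m, n)][1]
                r[(m, n)] = [v / norm for v in r[(m, n)]]
        # Step 3 (column process)
        for n in range(N):
            for m in B[n]:
                q_new = [np.prod([r[(x, n)][i] for x in B[n] if x != m]) for i in (0, 1)]
                q[(m, n)] = [v / (q_new[0] + q_new[1]) for v in q_new]
        # Step 4 (temporary word)
        for n in range(N):
            Q = [P(y[n], i) * np.prod([r[(x, n)][i] for x in B[n]]) for i in (0, 1)]
            c_hat[n] = None if Q[0] == Q[1] else int(Q[1] > Q[0])
        # Step 5 (parity check); Step 6 is the loop bound itself
        if None not in c_hat and np.array_equal(np.dot(c_hat, H.T) % 2, s):
            return c_hat
    return c_hat
```

With the default $s=0$ this sketch plays the role of ${\rm SPDec}$; passing a nonzero syndrome makes it the syndrome decoder ${\rm SynDec}$ used from Section V onward.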

Remark II.1

In general, the input $p$ may be different from the channel crossover probability $q$ for various reasons. For example, $p$ might be a quantized value of $q$, or the value of $q$ might not be known accurately.

Remark II.2

In Step 4, we assume that the temporary bit is $\emptyset$ if $Q_{n}^{(0)}=Q_{n}^{(1)}$. This assumption is required to properly define the correctable error set, which is one of the motivations of this paper (see Section III).

Remark II.3

Another popular definition of sum-product decoding replaces Steps 2 and 3 with

  • Step 2' (row process): For each $1\leq m\leq M$, $n\in A(m)$, and $i\in\{0, 1\}$, compute $$r_{m,n}^{(i)}:=K_{m,n}\sum_{(c_{1},\ldots,c_{N})\in X(m)^{(i\oplus s_{m})}}\ \prod_{x\in A(m)\setminus\{n\}}q_{m,x}^{(c_{x})},$$ where $X(m)^{(i)}:=\{c\in\{0, 1\}^{N}\mid cH_{m}^{T}=i\}$, $H_{m}$ is the $m$th row of $H$, $\oplus$ is the XOR operation, and $$P(a\mid b)=1-p\ \text{ if }a=b,\qquad P(a\mid b)=p\ \text{ if }a\ne b$$ for $a, b\in\{0, 1\}$. $K_{m,n}$ is a constant such that $r_{m,n}^{(0)}+r_{m,n}^{(1)}=1$.
  • Step 3' (column process): For each $1\leq n\leq N$, $m\in B(n)$, and $i=0, 1$, compute $$q_{m,n}^{(i)}:=K^{\prime}_{m,n}P(y_{n}\mid c_{n}=i)\prod_{x\in B(n)\setminus\{m\}}r_{x,n}^{(i)},$$ where $K^{\prime}_{m,n}$ is a constant such that $q_{m,n}^{(0)}+q_{m,n}^{(1)}=1$.

These replacements define an equivalent algorithm, i.e., the two algorithms always produce the same output.

SECTION III

CORRECTABLE ERROR SET

Assuming $s=0$, we denote the output of the sum-product decoding as ${\rm SPDec}[H, y, p, l_{\max}]$. Then for a linear code the following statements are equivalent:

  • ${\rm SPDec}[H, y, p, l_{\max}]=0$,
  • ${\rm SPDec}[H, c+y, p, l_{\max}]=c$,

where $c$ is any codeword defined by $H$. This suggests that we can define the correctable error set for an LDPC code under fixed initialization $p$ and maximal iteration number $l_{\max}$ as the set of error patterns corrected by ${\rm SPDec}$. Let the correctable error set be denoted by ${\cal E}_{H, p, l_{\max}}$. Note that if we change $p$ or $l_{\max}$, the correctable set ${\cal E}_{H, p, l_{\max}}$ changes. Significantly, ${\cal E}_{H, p, l_{\max}}$ is independent of the crossover probability $q$ of the channel.
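For short codes, the correctable error set can be obtained by brute force. A sketch (ours, not the paper's procedure; `decoder` stands for any implementation of ${\rm SPDec}$, e.g. the Section II sketch with its default $s=0$):

```python
import itertools
import numpy as np

def correctable_error_set(H, p, l_max, decoder):
    """Brute-force determination of E_{H, p, l_max} (feasible only for short codes).

    `decoder(H, y, p, l_max)` stands for any implementation of SPDec."""
    N = H.shape[1]
    correctable = []
    for bits in itertools.product((0, 1), repeat=N):     # all 2^N error patterns
        out = decoder(H, np.array(bits), p, l_max)
        if all(b == 0 for b in out):                     # decoded to the all-zero word
            correctable.append(bits)
    return correctable
```

For the matrix $H_{1}$ of Example III.1 below ($N=20$), this loop visits $2^{20}$ patterns, which is feasible; for longer codes, the syndrome-based approach of Section V is needed.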

Example III.1

Let the parity-check matrix $H_{1}$ be the $10\times 20$ matrix $$H_{1}=\left(\begin{matrix}10000100001000010000\\ 01000010000100001000\\ 00100001000010000100\\ 00010000100001000010\\ 00001000010000100001\\ 10000010000010000010\\ 01000001000001000001\\ 00100000100000110000\\ 00010000011000001000\\ 00001100000100000100\end{matrix}\right).$$

We can obtain the correctable error set ${\cal E}_{H_{1}, p, l_{\max}}$ by exhaustive search for fixed $p$ and $l_{\max}$.

Case $p=0.258$ and $l_{\max}=5$: ${\cal E}_{H_{1}, 0.258, 5}=\{0\}$, where $0$ is the all-zero vector.

Case $p=0.258$ and $l_{\max}=16$: ${\cal E}_{H_{1}, 0.258, 16}=\{0\}$.

Case $p=0.257$ and $l_{\max}=5$: ${\cal E}_{H_{1}, 0.257, 5}=\{0\}$.

Case $p=0.257$ and $l_{\max}=16$: ${\cal E}_{H_{1}, 0.257, 16}=\{0, e_{1},\ldots, e_{20}\}\cup E$, where $e_{i}$ is the $i$th unit vector, and $$\begin{aligned}E=\{&(10000000000000000001),\ (00000010000000100000),\\ &(01000000000000010000),\ (00000000010010000000),\\ &(00000001001000000000),\ (00100000000000001000),\\ &(00001000000000000010),\ (00000100000001000000),\\ &(00000000100100000000),\ (00010000000000000100)\}.\end{aligned}$$

Case $p=0.220$ and $l_{\max}=5$: ${\cal E}_{H_{1}, 0.220, 5}=\{0, e_{1},\ldots, e_{20}\}\cup E$.

Case $p=0.220$ and $l_{\max}=16$: ${\cal E}_{H_{1}, 0.220, 16}=\{0, e_{1},\ldots, e_{20}\}\cup E$.

The cardinality of ${\cal E}_{H_{1}, p, l_{\max}}$ for $l_{\max}=16$ is plotted in Fig. 1 as a function of $p$. For this case, we observe that the best values of $p$ represented in Fig. 1 are in the set $\{0.019, 0.020, 0.021, 0.022\}$. □

Fig. 1. Cardinality of correctable error sets for $l_{\max}=16$ and $0<p_{0}<0.5$.

Example III.2

Table I shows the weight distribution of the correctable error set for an array type (3, 11) LDPC code, known as an FSA code of type (3, 11) [6], which is a quasi-cyclic LDPC code with base model matrix $(m_{i,j})_{1\leq i\leq 3, 1\leq j\leq 11}$, $m_{i,j}:=(i-1)\times(j-1)$, and circulant size 11. The distribution was obtained with $l_{\max}=16$ and initialization $p=0.01$. This code is of length 121 $(=11\times 11)$. We observe that for this value of $p$, ${\rm SPDec}$ achieves bounded-distance decoding, since the minimum distance of this code is 6. Furthermore, 87% of the weight-3 error patterns are also correctable.

TABLE I. WEIGHT DISTRIBUTION OF THE CORRECTABLE ERROR SET FOR AN ARRAY TYPE (3, 11) LDPC CODE, $l_{\max}=16$, INITIALIZATION $p=0.01$

From Table I, we observe that the error of weight 121 is correctable for the array type (3, 11) LDPC code. In fact, this phenomenon occurs for any $0\leq p<0.5$ and $l_{\max}\geq 1$ whenever each row of the parity-check matrix has odd weight. The reason is the following: in Step 2 of ${\rm SPDec}$, $r_{m,n}^{(i)}$ is obtained from $x\in A(m)\setminus\{n\}$. Under the assumption that the row weight is odd, the value of $r_{m,n}^{(i)}$ for the all-1 error is the same as the value for the all-0 error. Therefore, the all-1 error is correctable “without iteration.”

SECTION IV

THEORETICAL ANALYSIS OF ${\rm WER}(p,q)$

In Section III, it was observed that the correctable error set ${\cal E}_{H, p, l_{\max}}$ was independent of the BSC crossover probability $q$. This suggests introducing another value $p$ to ${\rm SPDec}$. We call $p$ the initialization of the decoder. As a result, the word-error-rate is regarded as a function of two variables $p$ and $q$ for fixed parameters $H$ and $l_{\max}$. Let it be denoted by ${\rm WER}_{H, l_{\max}}(p,q)$. With this notation, the word-error-rate of the original sum-product decoding is equal to ${\rm WER}_{H, l_{\max}}(q,q)$. The following theorem presents properties of ${\rm WER}_{H, l_{\max}}(p,q)$:

Theorem IV.1

Let $H$ be a parity-check matrix and ${\rm WER}_{H, l_{\max}}(p,q)$ the word-error-rate for an initialization $p$, a crossover probability $q$, and the maximum iteration number $l_{\max}$. Then we have the following:

  1. For a fixed initialization $p_{0}$, ${\rm WER}(p_{0}, q)$ is a polynomial in $q$.
  2. For any initialization $p$, ${\rm WER}(p, 1/2)\times 2^{N}$ is an integer, where $N$ is the number of columns of $H$. Furthermore, ${\rm WER}(p, 1/2)$ is a discrete function of $p$.
  3. For some parity-check matrix $H$, ${\rm WER}(p,p)$, the word-error-rate of the original sum-product decoding, is not a continuous function of $p$.

Remark IV.1

Theorem IV.1 3) implies that there may be no polynomial representation of the word-error-rate ${\rm WER}(p, p)$ as a function of $p$ for some LDPC codes. On the other hand, by Theorem IV.1 1), for a fixed initialization $p_{0}$, there exists a polynomial representation of the word-error-rate ${\rm WER}(p_{0}, q)$ as a function of $q$ for any LDPC code. This fact motivates us to investigate sum-product decoding with a fixed initialization.

Before the proof is given, we recall the weight enumerator as follows: for a set ${\cal E}\subset\{0,1\}^{N}$, the weight enumerator with variables $X$ and $Y$ is defined as $$A_{\cal E}[X, Y]:=\sum_{e\in{\cal E}}X^{N-{\rm wt}(e)}Y^{{\rm wt}(e)},$$ where ${\rm wt}(e)$ is the Hamming weight of $e$.
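For illustration (our own sketch, not part of the paper), the weight enumerator is easy to evaluate numerically once the weight distribution of a correctable error set is known; here we use the distribution of ${\cal E}_{H_{1}, 0.220, 16}$ from Example III.1 with an arbitrary crossover probability $q=0.01$:

```python
def wer_from_weight_distribution(counts, N, q):
    """WER(p0, q) = 1 - A_E[1 - q, q], evaluated from the weight distribution
    counts = {w: number of correctable errors of Hamming weight w}."""
    A = sum(a * (1 - q) ** (N - w) * q ** w for w, a in counts.items())
    return 1.0 - A

# Weight distribution of E_{H_1, 0.220, 16} from Example III.1:
# one error of weight 0, twenty of weight 1, ten of weight 2.
print(wer_from_weight_distribution({0: 1, 1: 20, 2: 10}, N=20, q=0.01))
```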

Proof for Theorem IV.1
  1. As we pointed out, it is possible to define the correctable error set ${\cal E}_{H, p_{0}, l_{\max}}$ by fixing $H$, $p_{0}$, $l_{\max}$. Let $A_{{\cal E}_{H, p_{0}, l_{\max}}}[X, Y]$ be the weight enumerator of ${\cal E}_{H, p_{0}, l_{\max}}$. The probability that the error-vector is contained in ${\cal E}_{H, p_{0}, l_{\max}}$ is equal to $A_{{\cal E}_{H, p_{0}, l_{\max}}}[1-q, q]$ over a BSC with crossover probability $q$. This implies that $${\rm WER}(p_{0}, q)=1-A_{{\cal E}_{H, p_{0}, l_{\max}}}[1-q, q],$$ which is a polynomial of $q$.
  2. By the argument above, we have $${\rm WER}(p, 1/2)=1-A_{\cal E}[1/2, 1/2].$$ On the other hand, $$A_{\cal E}[1/2, 1/2]=\sum_{e\in{\cal E}}(1/2)^{N}=\vert{\cal E}\vert\times 2^{-N},$$ where $\vert A\vert$ is the cardinality of the set $A$. Therefore, $${\rm WER}(p, 1/2)\times 2^{N}=2^{N}-\vert{\cal E}\vert,$$ which is an integer.
  3. Define a finite set ${\cal A}\subset\mathbb{Z}[X, Y]$ as $${\cal A}:=\left\{\sum a_{i}X^{N-i}Y^{i}\ \mid\ a_{0}=1,\ 0\leq a_{i}\leq 2^{N}\right\},$$ where $\mathbb{Z}[X, Y]$ is the ring of polynomials in $X$ and $Y$ over the integers. Then any weight enumerator of a correctable error set is an element of ${\cal A}$, since $${\rm SPDec}[H, (0, 0, \ldots, 0), p, l_{\max}]=0\qquad(1)$$ for any $H$, $0<p<1/2$, $l_{\max}>0$, and there is only one vector of Hamming weight zero, so that $a_{0}=1$. Note that the cardinality of $\{0,1\}^{N}$ is $2^{N}$. Therefore, the cardinality of a correctable error set is at most $2^{N}$. This implies $$a_{i}\leq 2^{N}.$$

The word-error-rate ${\rm WER}(q, q)$ over a BSC with crossover probability $q$ is represented by $${\rm WER}(q, q)=1-f_{i}[1-q, q]$$ for some $f_{i}\in{\cal A}$, which may depend on $q$. Since each $f_{i}$ is a polynomial, each $1-f_{i}[1-q, q]$ is a continuous function of $q$.

Next, we observe the discontinuity for $H_{1}$ of Example III.1 with $l_{\max}=5$, 16. We have $${\cal E}_{H_{1}, 0.258, l_{\max}}=\{(0,0,\ldots,0)\}.$$ Its weight enumerator is $$f_{0}=X^{20}.$$ Then $${\rm WER}(0.258, 0.258)=1-f_{0}[1-0.258, 0.258].$$ Remember that we have $${\cal E}_{H_{1}, 0.220, l_{\max}}\ne\{(0,0,\ldots,0)\}.$$ Denoting the weight enumerator at $q=0.220$ by $f_{1}$, we obtain $$f_{1}=X^{20}+20X^{19}Y+10X^{18}Y^{2}.$$ This implies $f_{0}\ne f_{1}$.

On the other hand, $f_{0}[1-q, q]$ never crosses any other $f[1-q, q]$, $f\in{\cal A}$, in the interval $(0, 0.5)$, since the coefficients of $f[X,Y]-f_{0}[X,Y]$ are nonnegative and at least one of them is positive. By (1), every $f[X, Y]\in{\cal A}$ has the form $$f[X, Y]=X^{20}+\sum_{1\leq i\leq 20}a_{i}X^{20-i}Y^{i},$$ where $a_{i}$ is a nonnegative integer. Since $f_{0}[X, Y]=X^{20}$, we have $$f[X, Y]-f_{0}[X, Y]=\sum_{1\leq i\leq 20}a_{i}X^{20-i}Y^{i}.$$ Note that $f[X, Y]-f_{0}[X, Y]=0$ if and only if $f_{0}[X, Y]=f[X, Y]$. Remember that $$1-{\rm WER}(0.258, 0.258)=f_{0}[1-0.258, 0.258],\qquad 1-{\rm WER}(0.220, 0.220)=f_{1}[1-0.220, 0.220],$$ and $$f_{1}[X,Y]\ne f_{0}[X, Y].$$ Since ${\cal A}$ is finite and no other element of ${\cal A}$ crosses $f_{0}$ on $(0, 0.5)$, ${\rm WER}(q,q)$ cannot move continuously from the curve of $f_{1}$ to the curve of $f_{0}$; therefore, the word-error-rate is discontinuous in the interval $(0.220, 0.258)$. ∎

Remark IV.2

From the proof of 2), we have $\vert{\cal E}\vert=2^{N}\times(1-{\rm WER}(p,1/2))$. Therefore, we can easily read the word-error-rate ${\rm WER}(p,1/2)$ from the graph of the cardinalities of the correctable error set (see Fig. 1).

Remark IV.3

The key point of the proof of 3) is not only that the $f_{i}$'s are different, but also that the cardinality of the set ${\cal A}$ is finite. In fact, if the cardinality is infinite, it is possible for $f$ to be continuous. For example, define $${\cal A}=\{f_{i}\mid f_{i}(x):=i\ \ (0\leq i, x\leq 1)\}$$ as a set of constant functions. Although $f_{i}\ne f_{j}$ for all $i\ne j$, the function $$f(x):=f_{x}(x)=x$$ is continuous.

The finiteness of ${\cal A}$ in the proof of 3) is due to the finiteness of the set of error vectors. This is one of the reasons we assume that the communication channel is a BSC.

SECTION V

SYNDROME DECODING AND CORRECTABLE ERROR SET

From Theorem IV.1 2), the word-error-rate of fixed initialization decoding (FID) is closely related to the cardinality of the correctable error set. In this section, we discuss properties of correctable error sets.

We denote the output of the sum-product syndrome decoding by ${\rm SynDec}[H, y, p, l_{\max}, s]$, which emphasizes the input $s$, called a syndrome.

Theorem V.1

For any $H$, $y$, $p$, $l_{\max}$, the following two statements are equivalent:

  • ${\rm SPDec}[H, y, p, l_{\max}]=0$,
  • ${\rm SynDec}[H, 0, p, l_{\max}, Hy^{T}]=y$.

Theorem V.1 follows from the following proposition:

Proposition V.1

Let $Q$ and $R$ be the matrices updated by ${\rm SPDec}[H, y, p, l_{\max}]$ and let $\bar{Q}$ and $\bar{R}$ be the matrices updated by ${\rm SynDec}[H, 0, p, l_{\max}, Hy^{T}]$.

For any $1\leq m\leq M$, $1\leq n\leq N$, and $1\leq l\leq l_{\max}$, the following holds over a binary symmetric channel: $$q_{m,n}^{(i)}=\bar{q}_{m,n}^{(i\oplus y_{n})},\qquad r_{m,n}^{(i)}=\bar{r}_{m,n}^{(i\oplus y_{n})},\qquad Q_{n}^{(i)}=\bar{Q}_{n}^{(i\oplus y_{n})}.$$

Proof for Proposition V.1

The key idea of the proof is to track the two decoders ${\rm SPDec}$ and ${\rm SynDec}$ simultaneously.

For Step 1 (initialize), we remark that $$q_{m,n}^{(i)}=\bar{q}_{m,n}^{(i\oplus y_{n})}=1/2,\qquad r_{m,n}^{(i)}=\bar{r}_{m,n}^{(i\oplus y_{n})}=1/2.$$

For $1\leq l\leq l_{\max}$, we prove recursively that the proposition holds at the $(l+1)$th iteration assuming it holds at the $l$th iteration.

For Step 2 (row process), since the channel is a binary symmetric channel, $$r_{m,n}^{(i)}=K_{m,n}\sum_{(c_{1},\ldots,c_{N})\in X(m)^{(0)},\ c_{n}=i}\ \prod_{x\in A(m)\setminus\{n\}}q_{m,x}^{(c_{x})}P(y_{x}\mid c_{x})=K_{m,n}\sum_{(c_{1},\ldots,c_{N})\in X(m)^{(0)},\ c_{n}=i}\ \prod_{x\in A(m)\setminus\{n\}}q_{m,x}^{(c_{x})}P(0\mid c_{x}\oplus y_{x}).$$ Since $q_{m,x}^{(c_{x})}=\bar{q}_{m,x}^{(c_{x}\oplus y_{x})}$ holds at the previous iteration, the last term is $$K_{m,n}\sum_{(c_{1},\ldots,c_{N})\in X(m)^{(0)},\ c_{n}=i}\ \prod_{x\in A(m)\setminus\{n\}}\bar{q}_{m,x}^{(c_{x}\oplus y_{x})}P(0\mid c_{x}\oplus y_{x}).$$ Since $Hy^{T}=s$, the term above is equal to $$K_{m,n}\sum_{(c_{1}\oplus y_{1},\ldots,c_{N}\oplus y_{N})\in X(m)^{(s_{m})},\ c_{n}\oplus y_{n}=i\oplus y_{n}}\ \prod_{x\in A(m)\setminus\{n\}}\bar{q}_{m,x}^{(c_{x}\oplus y_{x})}P(0\mid c_{x}\oplus y_{x}).$$ By the definition of $\bar{r}_{m,n}^{(i\oplus y_{n})}$, it is equal to $\bar{r}_{m,n}^{(i\oplus y_{n})}$.

For Step 3 (column step), since $r_{x,n}^{(i)}=\bar{r}_{x,n}^{(i\oplus y_{n})}$ holds as it is shown above, $$q_{m,n}^{(i)}=K^{\prime}_{m,n}\prod_{x\in B(n)\setminus\{m\}}r_{x,n}^{(i)}=K^{\prime}_{m,n}\prod_{x\in B(n)\setminus\{m\}}\bar{r}_{x,n}^{(i\oplus y_{n})}=\bar{q}_{m,n}^{(i\oplus y_{n})}.$$

For Step 4 (temporary word), since $r_{x,n}^{(i)}=\bar{r}_{x,n}^{(i\oplus y_{n})}$ holds as shown above, $$Q_{n}^{(i)}=K^{\prime\prime}_{n}P(y_{n}\mid c_{n}=i)\prod_{x\in B(n)}r_{x,n}^{(i)}=K^{\prime\prime}_{n}P(0\mid c_{n}=i\oplus y_{n})\prod_{x\in B(n)}\bar{r}_{x,n}^{(i\oplus y_{n})}=\bar{Q}_{n}^{(i\oplus y_{n})}.$$ Therefore, we obtain the proposition. ∎

Proof of Theorem V.1

Suppose the decoding stops at the $l_{0}$th iteration. Then ${\rm SPDec}[H, y, p, l_{\max}]=0$

  • $\iff$ for $1\leq l<l_{0}$, the parity-check is not satisfied at the $l$th iteration, and at the $l_{0}$th iteration we have $Q_{n}^{(0)}>Q_{n}^{(1)}$ for all $1\leq n\leq N$;
  • $\iff$ for $1\leq l<l_{0}$, the parity-check is not satisfied at the $l$th iteration, and at the $l_{0}$th iteration we have $\bar{Q}_{n}^{(y_{n})}>\bar{Q}_{n}^{(1\oplus y_{n})}$ for all $1\leq n\leq N$, by Proposition V.1;
  • $\iff$ at the $l_{0}$th iteration, we have ${\rm SynDec}[H, 0, p, l_{\max}, Hy^{T}]=y$.

∎

Corollary V.2

For $H$, $p$, and $l_{\max}$, $${\cal E}_{H, p, l_{\max}}=\{\,y\in\{0,1\}^{N}\mid{\rm SynDec}[H, 0, p, l_{\max}, s]=y\ \text{for some}\ s\in\{0,1\}^{M}\,\}.$$

We call a syndrome $s\in\{0, 1\}^{M}$ a decodable syndrome if there exists $y\in{\cal E}_{H, p, l_{\max}}$ such that ${\rm SynDec}[H, 0, p, l_{\max}, s]=y$. Let us denote the set of the decodable syndromes by ${\cal D}_{H, p, l_{\max}}$.

Thanks to Corollary V.2, we can determine the correctable error set if the co-dimension $N-K$ of the code is small. For example, for the code of Example III.2, $N=121$ and $K=90$, so that an exhaustive search over all $2^{121}$ error patterns is infeasible. However, the co-dimension of the parity-check matrix is $N-K=31$. Therefore, it is possible to obtain Table I by Corollary V.2.
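A sketch of the syndrome-side enumeration suggested by Corollary V.2 (ours, not the paper's code; `syndrome_decoder` stands for any implementation of ${\rm SynDec}$, such as the Section II sketch): the loop runs over the $2^{M}$ syndromes rather than the $2^{N}$ error patterns.

```python
import itertools
import numpy as np

def correctable_set_via_syndromes(H, p, l_max, syndrome_decoder):
    """Determine E_{H, p, l_max} through Corollary V.2: run the syndrome decoder
    with y = 0 on every syndrome and keep the outputs that reproduce it.

    `syndrome_decoder(H, y, p, l_max, s)` stands for any implementation of SynDec."""
    M, N = H.shape
    correctable = set()
    for s in itertools.product((0, 1), repeat=M):        # 2^M syndromes instead of 2^N errors
        out = syndrome_decoder(H, np.zeros(N, dtype=int), p, l_max, np.array(s))
        if None not in out and np.array_equal(np.dot(out, H.T) % 2, np.array(s)):
            correctable.add(tuple(int(b) for b in out))
    return correctable
```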

Table I allows us to obtain a theoretical formula for ${\rm WER}(0.01, q)$ with $l_{\max}=16$ for the FSA code of type (3, 11). Fig. 2 compares the theoretical values and the computer experimental results for the word-error-rate over a BSC. Assuming an AWGN channel with one-bit quantized output, the SNR values 4.0, 5.0, 6.0, 7.0, and 8.0 dB correspond to crossover probabilities $q=0.026615$, 0.015044, 0.007475, 0.003162, and 0.001093, respectively. Note that numerical calculations are performed in floating point in the computer experiments, while our analysis of the sum-product decoding is theoretical. We observe that the theoretical and experimental values match well, as expected. Since we have a theoretical formula as a polynomial in $q$, it is possible to calculate word-error-rates at high SNR values. For example, ${\rm WER}(0.01, 0.00001)=9.04\times 10^{-12}$.

Fig. 2. Comparison of theoretical values and computer experimental results of WER with $p=0.01$ and $l_{\max}=16$.

SECTION VI

GRAPH AUTOMORPHISM AND CORRECTABLE ERROR SET

Let $H$ be an $M\times N$ low-density parity-check matrix and let $\sigma$ be an $M\times M$ permutation matrix on the index set $[M]:=\{1, 2,\ldots, M\}$. The permutation $\sigma$ acts naturally on the rows of $H$. Let $\sigma H$ denote the permuted matrix of $H$ by $\sigma$, and let $\sigma s$ denote the permuted vector of a column vector $s$ by $\sigma$. Similarly, $\tau$ denotes an $N\times N$ permutation matrix on the index set $[N]$, which acts on the columns of $H$. Let $H\tau$ denote the permuted matrix of $H$ by $\tau$, and let $s\tau$ denote the permuted vector of a row vector $s$ by $\tau$.

The following is a natural observation:

Proposition VI.1

Let $0\leq p\leq 1$ and $l_{\max}$ a positive integer. For any sequence $y\in\mathbb{F}_{2}^{N}$, the following are equivalent:

  1. ${\rm SPDec}[H, y, p, l_{\max}]=0$,
  2. ${\rm SPDec}[\sigma H, y, p, l_{\max}]=0$,
  3. ${\rm SPDec}[H\tau, y\tau, p, l_{\max}]=0$,

for any permutation $\sigma$ of the row index set of $H$ and any permutation $\tau$ of the column index set of $H$.

Proof

The equivalence of 1. and 2. is obtained directly from the definition of the sum-product decoding, since any row permutation for a parity-check matrix does not change the temporary word in Step 4.

The equivalence of 1. and 3. is obtained by the following equality: $${\rm SPDec}[H\tau, y\tau, p, l_{\max}]={\rm SPDec}[H, y, p, l_{\max}]\,\tau.$$ The equality follows from the definition of the sum-product decoding. ∎

In the previous section, we discussed the relation between syndrome decoding and the correctable error set. The following statement is similar to Proposition VI.1.

Proposition VI.2

Let $0\leq p\leq 1$ and $l_{\max}$ a positive integer. For any sequence $y\in\mathbb{F}_{2}^{N}$, the following are equivalent:

  • ${\rm SynDec}[H, 0, p, l_{\max}, Hy^{T}]=y$,
  • ${\rm SynDec}[H\tau, 0, p, l_{\max}, Hy^{T}]=y\tau$,

for any permutation $\tau$ of the columns of $H$.

Proof

The proof is similar to that of Proposition VI.1. ∎

The parity-check matrix $H$ of an LDPC code can be characterized by a bipartite graph $([M],[N], H)$ with vertex sets $[M]=\{1, 2,\ldots, M\}$ and $[N]=\{1, 2,\ldots, N\}$. This bipartite graph is called the Tanner graph of $H$. Therefore, it is natural that an automorphism of an LDPC code is a graph automorphism of its Tanner graph, although an automorphism of a linear code is defined as an index permutation which stabilizes the code space. We define an automorphism $(\sigma,\tau)$ of an LDPC code with parity-check matrix $H$ as a pair of index permutations on $[M]$ and $[N]$ which satisfies $$\sigma^{-1}H\tau=H.$$ If we define the product of automorphisms $(\sigma_{1},\tau_{1})$ and $(\sigma_{2},\tau_{2})$ by $$(\sigma_{1},\tau_{1})\times(\sigma_{2},\tau_{2}):=(\sigma_{1}\sigma_{2},\tau_{1}\tau_{2}),$$ the automorphisms constitute a finite group ${\rm Aut}(H)$. Note that we regard $\sigma$ and $\tau$ as permutation matrices of size $M\times M$ and $N\times N$, respectively, since they act on $H$ as index permutations.
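The automorphism condition $\sigma^{-1}H\tau=H$ is straightforward to test mechanically; a small sketch (ours), with permutations given as index arrays rather than matrices:

```python
import numpy as np

def perm_matrix(perm):
    """Permutation matrix P with P e_i = e_{perm[i]}."""
    P = np.zeros((len(perm), len(perm)), dtype=int)
    for i, j in enumerate(perm):
        P[j, i] = 1
    return P

def is_automorphism(H, sigma, tau):
    """Check sigma^{-1} H tau == H; sigma, tau are index arrays on [M] and [N]."""
    S, T = perm_matrix(sigma), perm_matrix(tau)
    return np.array_equal(S.T @ H @ T, H)     # S^{-1} = S^T for a permutation matrix

# Example: simultaneous cyclic shifts of rows and columns fix any circulant matrix.
n = 7
H = np.array([[1 if (j - i) % n in (0, 1, 3) else 0 for j in range(n)] for i in range(n)])
shift = [(i + 1) % n for i in range(n)]
print(is_automorphism(H, shift, shift))       # True
```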

Since an automorphism stabilizes the Tanner graph, we obtain the following result:

Proposition VI.3

Let $H$ be a parity-check matrix. Let ${\cal E}_{H, p, l_{\max}}$ (resp. ${\cal D}_{H, p, l_{\max}}$) be a correctable error set (resp. a decodable syndrome set) with initialization $p$ and maximal iteration number $l_{\max}$.

  1. For any error vector $y$ and any automorphism $(\sigma,\tau)\in{\rm Aut}(H)$, $$y\in{\cal E}_{H, p, l_{\max}}\iff y\tau\in{\cal E}_{H, p, l_{\max}}.$$
  2. For any syndrome $s$ and any automorphism $(\sigma,\tau)\in{\rm Aut}(H)$, $$s\in{\cal D}_{H, p, l_{\max}}\iff\sigma s\in{\cal D}_{H, p, l_{\max}}.$$
Proof
  • 1) By the definition of a correctable error set, $$y\in{\cal E}_{H, p, l_{\max}}\iff{\rm SPDec}[H, y, p, l_{\max}]=0.$$ By Proposition VI.1, $${\rm SPDec}[H, y, p, l_{\max}]=0\iff{\rm SPDec}[H\tau, y\tau, p, l_{\max}]=0.$$ Since $(\sigma,\tau)$ is an automorphism of $H$, $${\rm SPDec}[H\tau, y\tau, p, l_{\max}]=0\iff{\rm SPDec}[\sigma H, y\tau, p, l_{\max}]=0.$$ By applying Proposition VI.1 again, $${\rm SPDec}[\sigma H, y\tau, p, l_{\max}]=0\iff{\rm SPDec}[H, y\tau, p, l_{\max}]=0\iff y\tau\in{\cal E}_{H, p, l_{\max}}.$$
  • 2) Let $s\in{\cal D}_{H, p, l_{\max}}$ and $y={\rm SynDec}[H, 0, p, l_{\max}, s]$. Now we have $y\in{\cal E}_{H, p, l_{\max}}$ and $Hy^{T}=s$. By 1), $y\in{\cal E}_{H, p, l_{\max}}$ implies $y\tau\in{\cal E}_{H, p, l_{\max}}$, and therefore $H(y\tau)^{T}\in{\cal D}_{H, p, l_{\max}}$. If $(\sigma,\tau)$ is an automorphism of $H$ (i.e., $\sigma^{-1}H\tau=H$), then $(\sigma^{-1},\tau^{-1})$ is an automorphism too (i.e., $H=\sigma H\tau^{-1}$). Note that $\tau^{T}=\tau^{-1}$, since $\tau$ is a permutation matrix.

Therefore, $$H(y\tau)^{T}=H\tau^{-1}y^{T}=(\sigma^{-1}H)y^{T}=\sigma^{-1}s,$$ so that $\sigma^{-1}s\in{\cal D}_{H, p, l_{\max}}$. Applying the same argument to the automorphism $(\sigma^{-1},\tau^{-1})$ gives $\sigma s\in{\cal D}_{H, p, l_{\max}}$, which completes the proof. ∎

Proposition VI.3 generalizes the main theorem of [16], namely that “an error vector obtained by a quasi-cyclic permutation of a correctable error is also a correctable error.” Matsunaga et al. pointed out in [16] that this result reduces the computational cost of computer experiments for calculating bit error-rates. We point out that the same idea is applicable to reducing the computational cost of determining the correctable error set. In group theory, $$O_{x}:=\{gx\in\mathbb{F}_{2}^{M}\mid g\in{\rm Aut}(H)\}$$ is called the orbit of $x\in\mathbb{F}_{2}^{M}$. The orbits $\{O_{x}\}$ give a partition of $\mathbb{F}_{2}^{M}$. Therefore, the computational cost for determining the correctable error set is reduced to the number of orbits.

Example VI.4

Let $H$ be a parity-check matrix of size $M\times N$ whose columns consist of all weight-two vectors in $\mathbb{F}_{2}^{M}$, where $N=M(M-1)/2$. Then the group of check-node permutations $\sigma$ induced by ${\rm Aut}(H)$ is the symmetric group $S_{M}$ of degree $M$; in other words, $S_{M}$ acts on $[M]$. The orbits of $\mathbb{F}_{2}^{M}$ consist of $O_{0}, O_{1},\ldots, O_{M}$, where $O_{i}=\{x\in\mathbb{F}_{2}^{M}\mid{\rm wt}(x)=i\}$. Therefore, we can determine the correctable error set by $M+1$ computer experiments.

In general, the number of orbits can be determined from the following theorem.

Theorem VI.5 (Burnside's Lemma) [17]

Let $G$ be a finite group and $[M]$ a set such that $G$ acts on $[M]$. Then the number of the orbits of $[M]$ by $G$ is $$\frac{1}{\vert G\vert}\sum_{\tau\in G}\#\{m\in [M]\mid\tau m=m\}.$$
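As a sanity check of Example VI.4 (our own sketch), Burnside's lemma can be evaluated directly for $S_{M}$ acting on $\mathbb{F}_{2}^{M}$ by coordinate permutation: a permutation fixes exactly the vectors that are constant on each of its cycles, hence $2^{\#\text{cycles}}$ of them, and the orbit count comes out to $M+1$.

```python
import itertools
from math import factorial

def num_cycles(perm):
    """Number of cycles of a permutation of {0, ..., M-1}."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

def burnside_orbit_count(M):
    """(1 / |S_M|) * sum over permutations of the number of fixed vectors in F_2^M."""
    total = sum(2 ** num_cycles(perm) for perm in itertools.permutations(range(M)))
    return total // factorial(M)

print([burnside_orbit_count(M) for M in range(1, 7)])   # [2, 3, 4, 5, 6, 7], i.e. M + 1
```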

As a direct corollary of Theorem VI.5, we can count how many syndromes are sufficient to determine the correctable error set for an FSA code of type $(3, p)$. This number is $$\frac{2^{3p-2}+(p-1)(3\cdot 2^{p}-2p+4)}{p^{2}},$$ which represents roughly a reduction by a factor $p^{2}$ with respect to the typical number of syndromes $2^{3p-2}$. For example, for $p=11$, this number is $17{,}748{,}308\approx 2^{24.1}$. It is significantly smaller than the number of syndromes of an FSA code of type (3, 11), i.e., $2^{31}$.

SECTION VII

APPLICATIONS

A. Communications Channels

We performed experiments with our decoding method, FID, for a MacKay code of length 504 and rate 0.5 [14] (see Table II). FID with $p=0.10$ and $l_{\max}=10$ always outperforms the original sum-product decoding for crossover probabilities 0.02, 0.03, 0.04, and 0.05. For $l_{\max}=10$, the difference between FID with $p=0.10$ and ${\rm SPDec}$ is larger than for $l_{\max}=30$.

TABLE II. MACKAY (504, 252) CODE: WORD-ERROR-RATE BY SUM-PRODUCT DECODING AND FID $p=0.02$, 0.10, WITH $l_{\max}=10$, 20, AND 30, CROSSOVER PROBABILITY $q=0.02$, 0.03, 0.04, AND 0.05

Table III summarizes another experimental result, for the difference-set cyclic (DSC) code (273, 191) [15].

TABLE III. DSC (273, 191) CODE: WORD-ERROR-RATE BY SUM-PRODUCT DECODING AND FIXED INITIALIZATION DECODING $p=0.02$, 0.07, 0.01, WITH MAXIMAL ITERATION $l_{\max}=10$ AND 100, CROSSOVER PROBABILITIES $q=0.02$, 0.04, AND 0.07

FID with $p=0.07$ always outperforms the original sum-product decoding at the crossover probability $q=0.02$. At the crossover probability 0.04, the original sum-product decoding shows the best performance among the considered settings, and it is almost the same as FID with $p=0.07$. Similarly to the MacKay code case, the difference between FID with $p=0.07$ and ${\rm SPDec}$ for $l_{\max}=10$ is larger than for $l_{\max}=100$ at the crossover probability $q=0.02$.

Fig. 3 depicts the word-error-rate as a function of the maximal iteration number for the DSC code (273, 191) for FID with $p=0.01$, 0.07 at the crossover probability $q=0.01$. This figure indicates that both initializations 0.01 and 0.07 lead to the same WER for a very large number of iterations. On the other hand, a suitable initialization $p=0.07$ converges to the final word-error-rate much faster than the $p=q=0.01$ case.

Fig. 3. Relation between word-error-rate and iteration number for initialization $p=0.01$ and 0.07 and crossover probability $q=0.01$ for the DSC code (273, 191).

B. Quantum LDPC Codes

We introduce an application to evaluate the word-error-rate for quantum LDPC codes. It is known that the experiment for quantum CSS codes is implementable on a classical computer [18]. A quantum LDPC code of type CSS is a pair of LDPC codes associated with parity-check matrices $H_{x}$ and $H_{z}$ such that $H_{x}H_{z}^{T}=0$. A quantum codeword of the CSS code is characterized by a complex linear combination of quantum states defined by $\sum_{d\in D^{\perp}}\vert c+d\rangle$, where $D^{\perp}$ is the dual code associated with $H_{z}$ and $c$ is a codeword defined by $H_{x}$.

The experiment is composed of the following processes:

  1. Set a quantum Pauli channel with $p(I)+p(X)+p(Z)+p(XZ)=1$.
  2. Randomly generate a pair of error vectors $e_{x}$, $e_{z}$ according to the Pauli channel.
  3. Calculate the syndromes $s_{x}$, $s_{z}$ by $s_{x}=H_{x}e_{x}$, $s_{z}=H_{z}e_{z}$.
  4. Input the syndromes $s_{x}$ and $s_{z}$ to a sum-product syndrome decoder and obtain outputs $e_{x}^{o}$, $e_{z}^{o}$.
  5. Decoding succeeds if $e_{x}-e_{x}^{o}\in\langle H_{z}\rangle$ and $e_{z}-e_{z}^{o}\in\langle H_{x}\rangle$; decoding fails otherwise, where $\langle H\rangle$ is the code space generated by $H$.

From Theorem V.1, we can omit the 3rd step. We can also replace the 4th step with “Input the error vectors $e_{x}$ and $e_{z}$ to a sum-product decoder and obtain outputs $c_{x}^{o}$, $c_{z}^{o}$.” Finally, we can replace the 5th step with “Decoding succeeds if $c_{x}^{o}\in\langle H_{z}\rangle$ and $c_{z}^{o}\in\langle H_{x}\rangle$; decoding fails otherwise.”

Hence, we reduce the computational complexity of the experiment. Note that we can also replace the sum-product decoder with our FID. This makes the approach applicable to the security evaluation of quantum cryptography, in particular the BB84 protocol [19].
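A sketch of one trial of the simplified experiment (ours; the decoder interface, the GF(2) membership test, and the modelling of the Pauli-channel marginals as independent bit flips with probabilities $q_x$, $q_z$ are all our assumptions, not the paper's):

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) (Gaussian elimination)."""
    A = np.array(A, dtype=int) % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] = (A[r] + A[rank]) % 2
        rank += 1
    return rank

def in_row_space(H, v):
    """True if v lies in the GF(2) row space <H>."""
    return gf2_rank(np.vstack([H, v])) == gf2_rank(H)

def css_trial(Hx, Hz, qx, qz, p, l_max, decoder, rng=None):
    """One trial of the simplified experiment of Section VII-B (a sketch):
    the syndrome step is skipped and the error pattern itself is fed to the
    decoder (`decoder` stands for SPDec or FID, e.g. the Section II sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    N = Hx.shape[1]
    ex = (rng.random(N) < qx).astype(int)     # X-error pattern (independent flips)
    ez = (rng.random(N) < qz).astype(int)     # Z-error pattern (independent flips)
    cx = decoder(Hx, ex, p, l_max)
    cz = decoder(Hz, ez, p, l_max)
    return (None not in cx and None not in cz and
            in_row_space(Hz, np.array(cx)) and in_row_space(Hx, np.array(cz)))
```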

SECTION VIII

CONCLUSION

In this paper, we introduced the concepts of a correctable error set and of fixed initialization decoding, by noticing that, for a BSC, the sum-product decoder with a given iteration number depends only on the initialization probability of error. Although this value has conventionally been set to the BSC crossover probability, we showed that other choices can provide better performance or faster convergence. We also proved that for any fixed initialization $p$ (i.e., any given correctable error set), the word-error-rate can be represented as a polynomial in the BSC crossover probability. This suggests that the word-error-rate can be analytically derived from the (total or partial) knowledge of the correctable error set.

For further research, the following topics seem meaningful from the points of view of theory and practice:

  • While these results have been derived for the BSC, the same concepts can be extended to any discrete-input, discrete-output channel model, in particular quantized versions of the AWGN channel.
  • Construct an LDPC code such that its length is practically large but its correctable error set can be identified. One approach is to construct a bipartite graph with high symmetry. For example, consider a parity-check matrix $H$ whose columns consist of all column vectors of Hamming weight 2. Then its length is $M(M-1)/2$, where $M$ is the number of rows of $H$; e.g., the length is 4950 for $M=100$. In fact, the computational complexity of determining the correctable error set grows only linearly in $M$, thanks to the symmetry of $H$ (a construction sketch is given below). Unfortunately, the LDPC code associated with $H$ does not show good error-correcting performance. However, this example implies that it is not impossible to determine correctable error sets for codes of large length.
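For the construction mentioned in the last item, the parity-check matrix whose columns are all weight-2 vectors of $\mathbb{F}_{2}^{M}$ can be generated directly; a small sketch (ours):

```python
import itertools
import numpy as np

def all_weight2_columns_matrix(M):
    """M x (M(M-1)/2) binary matrix whose columns run over all weight-2 vectors
    of F_2^M; checks and variables form the vertex-edge incidence of the complete
    graph K_M, so every permutation of the M check nodes extends to an automorphism."""
    cols = []
    for i, j in itertools.combinations(range(M), 2):
        c = np.zeros(M, dtype=int)
        c[i] = c[j] = 1
        cols.append(c)
    return np.column_stack(cols)

print(all_weight2_columns_matrix(100).shape)   # (100, 4950), the length quoted above
```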

ACKNOWLEDGMENT

The authors thank the anonymous reviewers, and Mr. W. DeMeo and Mr. J. Kong for their valuable comments and suggestions to improve the quality of the paper.

Footnotes

This work was supported by KAKENHI 22760286. The paper was presented in part at the 2010 IEEE International Symposium on Information Theory (ISIT) and in part at the 32nd Symposium on Information Theory and Its Applications (SITA 2009, in Japanese).

M. Hagiwara is with the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba City, 305-8568, Japan, and also with the Center for Research and Development Initiative, Chuo University, Tokyo, 112-8551, Japan (e-mail: hagiwara.hagiwara@aist.go.jp).

M.P.C. Fossorier is with the ETIS, ENSEA/UCP/CNRS UMR-80516, Cergy-Pontoise, 95014, France.

H. Imai is with the AIST and Department of Electrical, Electronic and Communication Engineering, Faculty of Science and Engineering, Chuo University, Tokyo, 112-8551, Japan.

Communicated by I. Sason, Associate Editor for Coding Theory.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.


Authors


Manabu Hagiwara

Manabu Hagiwara received the B.E. degree in mathematics from Chiba University in 1997, and the M.E. and Ph.D. degrees in mathematical science from the University of Tokyo in 1999 and 2002, respectively. From 2002 to 2005 he was a postdoctoral fellow at IIS, the University of Tokyo. He was also a researcher at the Research Institute for Mathematical Sciences (RIMS), Kyoto University, in 2002. Currently, he is a scientist of Research Center for Information Security, National Institute of Advanced Industrial Science and Technology (AIST), is a visiting associate professor of Center for Research and Development Initiative, Chuo University, and is a research scholar at Department of Mathematics, University of Hawaii. His current research interests include industrial mathematics, coding theory, and algebraic combinatorics.


Marc P. C. Fossorier

Marc P. C. Fossorier received the B.E. degree from the National Institute of Applied Sciences (I.N.S.A.) Lyon, France in 1987, and the M.S. and Ph.D. degrees in 1991 and 1994, all in electrical engineering. His research interests include decoding techniques for linear codes, communication algorithms and statistics. Dr. Fossorier was a recipient of a 1998 NSF Career Development award and became IEEE Fellow in 2006. He served as Editor for the IEEE Transactions on Information Theory from 2003 to 2006, as Editor for the IEEE Transactions on Communications from 1996 to 2003, as Editor for the IEEE Communications Letters from 1999 to 2007, and as Treasurer of the IEEE Information Theory Society from 1999 to 2003. From 2002 to 2007, he was an elected member of the Board of Governors of the IEEE Information Theory Society which he served as Second and First Vice-President. He was Program Co-Chairman for the 2007 International Symposium on Information Theory (ISIT), the 2000 International Symposium on Information Theory and Its Applications (ISITA) and Editor for the Proceedings of the 2006, 2003 and 1999 Symposium on Applied Algebra, Algebraic Algorithms and Error Correcting Codes (AAECC).


Hideki Imai

Hideki Imai (LF'86) received the B.E., M.E., and Ph.D. degrees in electrical engineering from the University of Tokyo, Tokyo, Japan, in 1966, 1968, and 1971, respectively. From 1971 to 1992, he was on the faculty of Yokohama National University. From 1992 to 2006, he was a Professor in the Institute of Industrial Science, University of Tokyo. In 2006, he was appointed to Emeritus Professor of the University of Tokyo and Professor at Chuo University. Concurrently, he serves as the Director of Research Center for Information Security, National Institute of Advanced Industrial Science and Technology as well as a Director of The Institute of Science and Engineering, Chuo University. Dr. Imai received the Best Book Awards from IEICE in 1976 and 1991, the Best Paper Awards in 1992, 2003, 2004, and 2008, the Yonezawa Memorial Paper Award in 1992, the Achievement Award in 1995, the Inose Award in 2003, and the Distinguished Achievement and Contributions Award in 2004. He also received the Golden Jubilee Paper Award from the IEEE Information Theory Society in 1998, the Wilkes Award from the British Computer Society in 2007, and the Official Commendations from the Minister of Internal Affairs and Communications in 2002, from the Minister of Economy, Trade, and Industry in 2002, and from the Chief Cabinet Secretary in 2009. He was awarded an Honorary Doctor degree from Soonchunhyang University in 1999 and Docteur Honoris Causa by Toulon University in 2002. He is also the recipient of the Ericsson Telecommunications Award 2005, the Okawa Prize 2008, and the 61st NHK Broadcasting Culture Award. He is a member of the Science Council of Japan, a Fellow of IEICE and IACR in 1992, 2001, and 2007, respectively, an IEEE Life Fellow, and an IEICE Fellow, Honorary Member. He served as the President of the Society of Information Theory and its Applications in 1997, of the IEICE Engineering Sciences Society in 1998, and of the IEEE Information Theory Society in 2004. He is currently the Chair of Cryptography Techniques Research and Evaluation Committee of Japan (CRYPTREC) and of the IEEE Japan Council.
