Iterative Hard-Decision Decoding Algorithms for Binary Reed-Muller Codes

In this paper, novel hard-decision iterative decoding algorithms for binary Reed-Muller (RM) codes are presented. First, two hard-decision algorithms, the bit-flipping (BF) and the normalized bit-flipping (NBF) decoding algorithms, are devised based on the majority-logic decoding algorithm together with reliability measures of the received sequence. According to the updated hard reliability measures, the BF and NBF algorithms flip one bit of the received hard-decision sequence in each iteration. The NBF decoding algorithm performs better than the BF decoding algorithm by normalizing the reliability measures of the information bits. Moreover, the BF and NBF algorithms are modified to flip multiple bits in one iteration to reduce the average number of iterations. The modified decoding algorithms are called the multiple-bits-flipping (MBF) algorithm and the normalized multiple-bits-flipping (NMBF) algorithm, respectively. The proposed algorithms have low computational complexity and converge after a small number of iterations. Simulation results show that the proposed hard-decision decoding algorithms outperform the conventional majority-logic decoding algorithm.


I. INTRODUCTION
The Reed-Muller (RM) code is a class of multiple-error-correction codes. In the past, RM codes were used in wireless communications, especially in deep-space communications. RM codes were first discovered by Muller [1], and the conventional decoding scheme for RM codes, the majority-logic decoding, was proposed by Reed in 1954 [2]. Ever since their discovery, a number of efficient decoding schemes have been constructed. In 1995, Schnabl and Bossert proposed a generalization of Reed-Muller codes via multiple concatenations and provided a new decoding procedure using soft-decision information [3]. In 1999, a maximum-likelihood (ML) decoder which uses a distance-preserving map and multiple fast Hadamard transforms (FHTs) was presented by Jones and Wilkinson [4], whereas a maximum a posteriori (MAP) decoding algorithm for the first-order RM codes was proposed in [5]. In 2000, a new soft-decision majority-logic decoding algorithm was presented by revising the conventional majority-logic decoding [6]. Besides, recursive decoding algorithms were provided in [7]-[11]. In 2006, the recursive list decoding was modified by Dumer and Shabunov to approach the performance of ML decoding [9]. Recently, Ye and Abbe proposed the recursive projection-aggregation (RPA) decoding algorithm with lists to achieve performance comparable with the recursive list decoding [10]. In the same year, the recursive puncturing-aggregation (RXA) algorithm was proposed by replacing the projection in RPA with puncturing [11].
(The associate editor coordinating the review of this manuscript and approving it for publication was Mingjun Dai.)
In recent years, the breakthrough of polar codes [12]-[15] has brought attention back to RM codes due to the similarity between these two codes. The performance comparison between RM codes and polar codes was demonstrated in [10], [15]-[17]. It has been shown that RM codes with RPA decoding [10] can significantly outperform polar codes under successive-cancellation list (SCL) decoding [18]-[20]. In addition, by exploiting the idea of decoding polar codes, [21], [22] presented permutation-based decoding methods for RM codes.
Although various algorithms for decoding RM codes have been designed, most of them are soft-decision decoding algorithms. In this paper, hard-decision decoding with low complexity is taken into consideration. The conventional majority-logic decoding has low complexity, but its performance is not good enough. In contrast, the hard-decision ML decoding algorithm has better performance, but its complexity is extremely large. In this work, we aim for new algorithms outperforming the conventional majority-logic decoding with controllable complexity. This paper presents novel iterative decoding algorithms for Reed-Muller codes by exploiting the idea of the bit-flipping [23] and min-sum [24] decoding algorithms for low-density parity-check (LDPC) codes [25]. The proposed algorithms are devised based on the min-sum operation and the reliability measures of the received sequence. The first proposed algorithm is called the bit-flipping (BF) decoding algorithm because it flips one bit of the received hard-decision sequence at a time in each iteration. Through decoding iterations, the hard-decision sequence is updated. Next, the second decoding algorithm, called the normalized bit-flipping (NBF) decoding algorithm, is proposed. The reliability measures of the information bits are normalized depending on the degree to balance the number of votes. Therefore, the NBF decoding algorithm offers better performance than the BF decoding algorithm. To reduce the number of iterations and the complexity, a termination process is provided as well. These proposed iterative decoding algorithms converge very rapidly. For the hard-decision BF and NBF decoding algorithms, we additionally provide a condition that allows flipping multiple bits in each iteration.
(VOLUME 10, 2022. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
The advantage of multiple-bit flipping is to reduce the required number of iterations and hence the computational complexity. We call the resulting algorithms the multiple-bits-flipping (MBF) decoding algorithm and the normalized multiple-bits-flipping (NMBF) decoding algorithm, respectively.
The numerical experiment results show that the proposed hard-decision decoding algorithms perform better than the conventional majority-logic decoding algorithm over the additive white Gaussian noise (AWGN) channel.
The rest of this paper is organized as follows. Notations and definitions are provided in Section II. Section III introduces the conventional majority-logic decoding for RM codes. In Section IV, the steps and examples for the proposed algorithms are given. Section V shows the simulation results over the AWGN channel. Finally, the concluding remarks are given in Section VI.

II. PRELIMINARIES AND NOTATIONS
We denote the binary rth-order Reed-Muller code of length 2^m over GF(2) by RM(r, m). For i = 1, 2, ..., m, let x_i be the 2^m-tuple vector consisting of 2^{m-i+1} alternating all-one and all-zero segments of length 2^{i-1} each, and let x_0 = (11···1) be the all-one vector. For vectors a = (a_0, a_1, ..., a_{n-1}) and b = (b_0, b_1, ..., b_{n-1}), we denote ab = (a_0·b_0, a_1·b_1, ..., a_{n-1}·b_{n-1}), where "·" represents the product, and we say that the product vector x_{i_1} x_{i_2} ··· x_{i_γ}, where 1 ≤ i_1 < i_2 < ··· < i_γ ≤ m, has degree γ. Then, RM(r, m) can be generated by the following set of vectors:

{x_0, x_1, ..., x_m, x_1 x_2, x_1 x_3, ..., x_{m-1} x_m, ..., up to products of degree r}.   (2)

Therefore, the generator matrix of RM(r, m) can also be expressed as the matrix G whose rows are the vectors in (2).   (3)

Assume that u = (u_0, u_1, ..., u_{K-1}) is the information vector, where K is the number of information bits. Then, the encoded codeword c = (c_0, ..., c_{2^m - 1}) can be obtained by

c = u_0 g_0 + u_1 g_1 + ··· + u_{K-1} g_{K-1},

where g_j is the jth row of the generator matrix G. For 1 ≤ j ≤ K − 1, if u_j is the coefficient of a product vector of degree γ, we call u_j the information bit corresponding to a product vector of degree γ. Also, u_0 is called the information bit corresponding to the all-one vector because it is the coefficient of x_0. Take RM(2, 3) for example: if the information vector is u = (u_0, u_1, u_2, u_3, u_4, u_5, u_6), then the encoded codeword is expressed as

c = u_0 x_0 + u_1 x_1 + u_2 x_2 + u_3 x_3 + u_4 x_1 x_2 + u_5 x_1 x_3 + u_6 x_2 x_3,

where u_0 is the information bit corresponding to the all-one vector; u_1, u_2, and u_3 are the information bits corresponding to the product vectors of degree 1; and u_4, u_5, u_6 are the information bits corresponding to the product vectors of degree 2.
(* The BF algorithm was briefly described in the conference paper [26].)
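The construction above can be illustrated with a short sketch. This is a minimal illustration, not the paper's code: the helper names `rm_generator` and `rm_encode` are ours, and the x_i vectors are evaluated from the bits of the position index, one common convention that is equivalent to the segment description up to a relabeling of coordinates.

```python
import itertools

def rm_generator(r, m):
    """Rows of a generator matrix for RM(r, m): the all-one vector x_0
    plus all coordinatewise products of degree <= r of the base
    vectors x_1, ..., x_m (bit-index evaluation convention)."""
    n = 2 ** m
    x = [[(l >> (m - i)) & 1 for l in range(n)] for i in range(1, m + 1)]
    rows = [[1] * n]                      # degree 0: the all-one vector
    for deg in range(1, r + 1):
        for comb in itertools.combinations(range(m), deg):
            prod = [1] * n
            for i in comb:
                prod = [p & xi for p, xi in zip(prod, x[i])]
            rows.append(prod)
    return rows

def rm_encode(u, G):
    """c = uG over GF(2): XOR of the rows selected by the info bits."""
    c = [0] * len(G[0])
    for uj, gj in zip(u, G):
        if uj:
            c = [cl ^ gl for cl, gl in zip(c, gj)]
    return c

G = rm_generator(2, 3)                    # RM(2, 3): K = 7, n = 8
c = rm_encode([1, 0, 1, 1, 0, 1, 0], G)
```

Enumerating all 2^7 information vectors confirms the expected minimum distance 2^{m-r} = 2 for RM(2, 3).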

III. REVIEW OF THE CONVENTIONAL MAJORITY-LOGIC DECODING
For hard-decision decoding, the conventional decoding algorithm is the majority-logic decoding algorithm, which was first proposed by Reed in [2]. The concept of the conventional majority-logic decoding is simple and its complexity is low. However, its performance is limited. In contrast, the hard-decision ML decoding algorithm has better performance, but its complexity is extremely large. The comparison of complexities for different hard-decision decoding algorithms is given in Table 1. In our work, the major contribution is to modify the conventional majority-logic decoding to obtain better performance with acceptable complexity. In the following, we introduce the procedure of the conventional majority-logic decoding. During the decoding process, there are 2^{m-γ} votes that can be used to estimate each information bit corresponding to a product vector of degree γ. We denote by k the index of a vote. The sets E and B are then defined accordingly, and for each q_k ∈ B we can construct a set S_k consisting of the bit indices of the received sequence involved in the kth vote. The steps of the conventional majority-logic decoding for RM(r, m) can be described as follows. Initialization: Let γ = r and n = 2^m.
Step 1. For 0 ≤ l < n, make a hard decision on each received value y_l from the noisy channel by v_l = 0 if y_l > 0, and v_l = 1 otherwise.
Step 2. For any information bit u_j which is the coefficient of a product vector of degree γ, compute each vote û_{j,k} for u_j by the modulo-2 sum of v_l over l ∈ S_k for 0 ≤ k < 2^{m-γ}.   (10)
Step 3. Determine the estimate of u_j according to the majority voting of the 2^{m-γ} equations from (10). If γ = 0, go to Step 5.
Step 4. Remove the estimated information bits u_j corresponding to vectors of degree γ from the received sequence, i.e., subtract their contributions û_j g_j (modulo 2) from the hard-decision sequence, set γ = γ − 1, and go to Step 2.
Step 5. Stop decoding and output all decoded information bits û_j.
In Step 2, there are 2^{m-γ} estimates for one information bit u_j, and the estimate of u_j is determined based on these 2^{m-γ} votes in Step 3. The estimate of u_j is the value (∈ {0, 1}) with the majority of votes, which is why the algorithm is called majority-logic decoding. Then, Step 4 decreases the degree by setting γ = γ − 1, and the decoding process is repeated until γ = 0.
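As an illustration of Steps 2 and 3, the following sketch computes the 2^{m-γ} votes for a top-degree information bit and takes the majority. It is a hypothetical helper, not the paper's code: it assumes the evaluation-code convention in which each vote is the XOR of the hard decisions over one coset S_k of positions, with T giving the 0-based index bits associated with the monomial.

```python
import itertools

def majority_vote_topdegree(v, m, T):
    """Reed's majority-logic vote for the information bit attached to
    the degree-|T| monomial over index-bit set T.  v is the
    hard-decision sequence of length 2**m."""
    free = [i for i in range(m) if i not in T]
    votes = []
    for fixed in itertools.product([0, 1], repeat=len(free)):
        # one coset S_k: bits in T range over all values, others fixed
        s = 0
        for tvals in itertools.product([0, 1], repeat=len(T)):
            l = 0
            for bit, i in zip(tvals, T):
                l |= bit << i
            for bit, i in zip(fixed, free):
                l |= bit << i
            s ^= v[l]
        votes.append(s)
    # majority decision over the 2**(m - |T|) votes
    return 1 if sum(votes) * 2 > len(votes) else 0
```

For RM(1, 3) each degree-1 bit gets 2^{m-1} = 4 votes, so a single channel error is outvoted 3 to 1.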

IV. PROPOSED ITERATIVE DECODING ALGORITHMS
In this section, we first demonstrate the hard-decision BF and NBF decoding algorithms. Then, two modified decoding algorithms, i.e., MBF and NMBF algorithms, which consider multiple-bits flipping are introduced.

A. BF DECODING ALGORITHM
We follow the same notations given in Section III. Besides, let I max be the maximum number of iterations in the decoding process.
R^(i)_j denotes the reliability measure of the jth information bit u_j in the ith decoding iteration, and φ^(i)_l denotes the reliability measure in the lth position of the sequence. Furthermore, M_l represents the set of indices j such that G_{j,l} = 1, where G_{j,l} is the element in the jth row and lth column of the generator matrix G in (3). Given an additional condition on γ, M^γ_l represents the collection of indices j such that G_{j,l} = 1 and the jth row vector of G is a product vector of degree γ. Moreover, D_l denotes the difference between the estimated codeword bit v^(i)_l and the reliability measure in the lth position of the received sequence. Based on the notations defined above, the block diagram of the decoding scheme is illustrated in Fig. 1.
In each decoding iteration, only one bit of the updated codeword is flipped at a time.
Initialization: Set i = 1 and γ = r. Let I_max denote the maximum number of iterations, and let y = (y_0, y_1, ..., y_{n-1}) be the received sequence over the AWGN channel.
Step 1. For 0 ≤ l < n, make a hard decision v_l on each received value y_l as in Step 1 of the conventional majority-logic decoding, and initialize the reliability measure φ^(1)_l.
Step 2. For 0 ≤ k < 2^{m-γ}, compute each vote û^(i)_{j,k} for the information bit u_j corresponding to a vector of degree γ from the reliability measures φ^(i)_l with l ∈ S_k, where S_k is the set of indices in the received sequence which are related to u_j as given in (8).
Step 3. Obtain the reliability measure R^(i)_j of the information bit u_j by summing all 2^{m-γ} votes from Step 2. If γ = 0, go to Step 5.
Step 4. For l = 0, 1, ..., n − 1, if φ^(i)_l does not involve the information of any information bit of degree γ, then φ^(i)_l is kept unchanged; otherwise, φ^(i)_l is updated by removing the contributions of the estimated information bits of degree γ. Set γ = γ − 1 and go to Step 2.
Step 5. If i = I_max, make the following hard decision: û_j = 0 if R^(i)_j > 0, and û_j = 1 otherwise, and stop decoding. Otherwise, go to Step 6.
Step 6. Re-encode the estimated information bits to obtain the updated codeword estimate v^(i)_l for 0 ≤ l < n. Go to Step 7.
Step 7. If v^(i)_l = v^(i−1)_l for all l, make the following hard decision: û_j = 0 if R^(i)_j > 0, and û_j = 1 otherwise, and stop decoding. Otherwise, go to Step 8.
Step 8. For l = 0, 1, ..., n − 1, calculate the distance D_l between the estimated codeword bit v^(i)_l and the reliability measure φ^(i)_l when they disagree; otherwise, set D_l = 0. Go to Step 9.
Step 9. Find the index l* with the maximum value of D_l and flip the codeword bit v^(i)_{l*}. Set i = i + 1, γ = r, and φ^(i) = v^(i). Go to Step 2.
To reduce the complexity, a termination process is developed in Step 7. The decoding process is terminated when the updated sequences are identical in two consecutive iterations.
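The overall control flow of the single-bit-flipping loop (Steps 5 to 9) can be sketched as follows. This is a structural sketch only: `score_fn` is a hypothetical stand-in for the computation of the distances D_l from the updated reliability measures, which the paper derives in Steps 2 to 8.

```python
def bit_flip_decode(v, score_fn, i_max=30):
    """Single-bit-flipping loop: each iteration flips the bit with the
    largest disagreement metric D_l (score_fn is a hypothetical helper
    returning the list of D_l for the current sequence) and stops early
    when two consecutive sequences agree (the Step 7 termination)."""
    v = list(v)
    prev = None
    for _ in range(i_max):
        if v == prev:            # termination: no change between iterations
            break
        prev = list(v)
        D = score_fn(v)
        l_star = max(range(len(v)), key=lambda l: D[l])
        if D[l_star] > 0:
            v[l_star] ^= 1       # flip the single most suspect bit
    return v
```

With a metric that scores every erroneous position, the loop corrects one bit per iteration and then terminates via the early-stopping test.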
Note that Step 2, Step 4, and Step 8 of the BF algorithm in [26] are modified in this manuscript to have simpler expressions.

B. NBF DECODING ALGORITHM
For the BF algorithm, the reliability measures are updated based only on the signum of the R^(i)_j's in (15). In the NBF algorithm, we want to incorporate the magnitudes of the reliability measures when the φ^(i)_l's are updated. In addition, the concept of the min-sum algorithm is exploited, and (13) and (15) are modified with min-sum operations. Besides, the reliability measures R^(i)_j in (14) have to be normalized to balance the number of votes for different degrees. Therefore, the modified algorithm is called the normalized BF algorithm. Letting µ_γ be the normalization factor used for degree γ, Steps 2 to 4 of the BF decoding are modified and described as follows.
Step 2. For 0 ≤ k < 2^{m-γ}, compute each vote for the information bit u_j corresponding to a vector of degree γ by the min-sum operation, where S_k is the set of indices in the received sequence which are related to u_j as given in (8).
Step 3. Obtain the reliability measure of the information bit u_j by summing all 2^{m-γ} votes from Step 2 and normalizing the sum by µ_γ, where µ_γ is a positive integer. If γ = 0, go to Step 5.
Step 4. For l = 0, 1, ..., n − 1, if φ^(i)_l does not involve the information of any information bit of degree γ, then φ^(i)_l is kept; the remaining updates are identical to (13) and (15), respectively. Set γ = γ − 1 and go to Step 2.
Remark 2: To obtain better performance, µ_γ can be adjusted for different RM(r, m). We will discuss its impact on the error performance in Section V.
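A minimal sketch of the min-sum vote and the normalized reliability measure, under the usual min-sum convention borrowed from LDPC decoding (the sign of a vote comes from the parity of negative inputs, and its magnitude from the least reliable input); the helper names are our own assumptions, not the paper's notation.

```python
def min_sum_vote(phi, S_k):
    """One min-sum style vote over the index set S_k: sign from the
    parity of negative reliability values, magnitude from the least
    reliable participant (the min-sum operation of [24])."""
    sign = 1
    mag = float("inf")
    for l in S_k:
        if phi[l] < 0:
            sign = -sign
        mag = min(mag, abs(phi[l]))
    return sign * mag

def normalized_reliability(phi, index_sets, mu_gamma):
    """Reliability of one information bit: the sum of its min-sum votes
    scaled by the degree-dependent normalization factor mu_gamma."""
    return sum(min_sum_vote(phi, S) for S in index_sets) / mu_gamma
```

The normalization keeps reliabilities comparable across degrees, since a degree-γ bit collects 2^{m-γ} votes while higher-degree bits collect fewer.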

C. MBF AND NMBF DECODING ALGORITHMS
In addition to the notations given in Section IV-A, we define a new parameter D_Th, which is a pre-defined threshold. If a distance calculated in Step 8 of the BF algorithm is larger than D_Th, then the corresponding bit is flipped. Therefore, multiple bits can be flipped in each iteration. The MBF algorithm is obtained by modifying Step 9 of the BF algorithm.
Initialization: Set i = 1 and γ = r. Assign a pre-defined value to D Th . Let the maximum number of iterations be I max .
Step 9. Find the index l* with the maximum value of D_l, i.e., l* = argmax_{0≤l<n} D_l.
Set i = i + 1. Then, update the estimated codeword by flipping not only the codeword bit with the largest distance but also the codeword bits whose distances are larger than D_Th; that is, for 0 ≤ l < n, v^(i)_l is flipped if l = l* or D_l > D_Th. Set γ = r and φ^(i) = v^(i). Go to Step 2.
The major difference between the BF and the MBF algorithms is that the MBF algorithm allows flipping multiple bits in each iteration. Therefore, the required number of iterations of the MBF algorithm is reduced. In addition, the bit error rate (BER) performance is not affected if an appropriate value of D_Th is selected.
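The modified Step 9 can be sketched as follows (illustrative only; `multi_bit_flip` is our own helper name, and D is the list of distances from Step 8):

```python
def multi_bit_flip(v, D, d_th):
    """MBF flip rule: flip the bit with the largest distance D_l and
    every other bit whose distance exceeds the threshold D_Th."""
    l_star = max(range(len(v)), key=lambda l: D[l])
    return [bit ^ 1 if (l == l_star or D[l] > d_th) else bit
            for l, bit in enumerate(v)]
```

With a very large threshold the rule degenerates to the BF behavior of flipping only the single worst bit, which is why an appropriate D_Th trades iterations against the risk of flipping correct bits.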
Next, by utilizing the concept of normalization in NBF, we can devise the normalized MBF decoding algorithm, i.e., the NMBF decoding algorithm. The NMBF algorithm can perform close to the NBF algorithm with fewer iterations, which will be demonstrated in the next section.

V. SIMULATION RESULTS AND COMPLEXITY ANALYSIS
In this section, we compare the performance and complexity of various decoding algorithms over the AWGN channel. First, we discuss the influence of the normalization factor µ_γ on the BER performance. In the NBF decoding algorithm, R^(i)_j is normalized by µ_γ since the number of votes can be very large. Furthermore, in order to balance the number of votes for different degrees, we set µ_γ = 2µ_{γ+1} for γ = 0, 1, ..., r − 1. For ease of presentation, we use ''NX'' to denote the normalization factor µ_0 = X. Also, ''IX'' represents that the maximum number of iterations is X. E.g., in Fig. 2, ''N4'' and ''I30'' represent that we use µ_0 = 4 and I_max = 30, respectively, for the NBF decoding. Fig. 2 demonstrates the BER performance of the NBF decoding algorithm with different µ_0 for RM(2, 7). From Fig. 2, we observe that the NBF decoding algorithm with µ_0 = 32 (N32) has the best BER performance for RM(2, 7).
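Under the relation µ_γ = 2µ_{γ+1}, the factors halve as the degree grows, i.e., µ_γ = µ_0/2^γ. A one-line sketch (our own helper name, assuming µ_0 is chosen as a power of two so all factors stay integers, as the fixed-point discussion below suggests):

```python
def normalization_factors(mu0, r):
    """Degree-dependent factors satisfying mu_gamma = 2 * mu_{gamma+1},
    i.e. mu_gamma = mu0 / 2**gamma: the factor halves as the degree
    grows, balancing the vote counts 2**(m - gamma)."""
    return [mu0 // 2 ** g for g in range(r + 1)]
```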
To evaluate the number of iterations needed for the BF and NBF decoding algorithms to converge, we provide the BER performances of RM(2, 7) decoded with these two algorithms in Fig. 3 and Fig. 4, respectively. Fig. 3 shows that the performance of RM(2, 7) decoded with the BF decoding algorithm converges at 10 iterations. Similarly, the NBF decoding algorithm requires 10 iterations to converge to its best performance according to Fig. 4. Numerical experiments show that only a small number of iterations is needed for performance convergence. In the sequel, we set I_max = 30 for the BF and NBF decoding algorithms.
Moreover, the BER performances of different hard-decision decoding algorithms are shown in Fig. 5 and Fig. 6. We compare the performances of the proposed BF decoding algorithm, the NBF decoding algorithm, and the conventional majority-logic decoding algorithm (labeled ''hard majority'') in Figs. 5 and 6. We can see that the BF decoding algorithm outperforms the conventional majority-logic decoding algorithm. At a BER of 10^−5, the proposed BF decoding algorithm has at least 0.3 dB gain over the conventional majority-logic decoding algorithm. The NBF decoding algorithm performs better than the BF decoding algorithm. As shown in Fig. 6, the NBF algorithm can have 0.5 dB gain compared to the BF algorithm. Moreover, there are 0.55 dB and 0.81 dB gains in comparison with the conventional majority-logic decoding algorithm for RM(2, 7) and RM(3, 8), respectively.
The general complexity comparison for hard-decision algorithms is provided in Table 1. From Table 1, we find that our proposed BF and NBF algorithms have the same complexity in terms of additions. The NBF algorithm outperforms the BF algorithm in BER, but it requires additional multiplications. However, we can choose the normalization factor to be a power of two; hence, multiplications can be avoided in a fixed-point implementation. Note that, for the first-order RM codes, the complexity order of the hard-decision maximum-likelihood (HDML) algorithm is O(n log n) when FHT decoding is used [27], [28]. Besides, K denotes the code dimension and 2^K is the number of codewords.
The comparison of the average number of iterations for the proposed algorithms is shown in Fig. 7. We set I_max = 30; that is, the decoding process terminates at or before the 30th iteration. Since the BF and NBF decoding algorithms flip only one bit in each iteration, they require more iterations to successfully decode the erroneous codewords. The MBF and NMBF decoding algorithms require fewer iterations than the BF and NBF algorithms, respectively. The number of iterations can be reduced by 40% to 80%. The major complexity of these iterative algorithms lies in the computations of each iteration. For each iteration, the complexity order of the required operations is O(Σ_{i=0}^{r} C(m, i) · 2^{m-i}), where C(m, i) denotes the binomial coefficient. If we can terminate the decoding process with fewer iterations, the overall complexity can be reduced significantly. Fig. 8 shows the block error rate (BLER) performances of the BF, MBF, NBF, and NMBF algorithms. The NBF and NMBF algorithms have the same BLER performance. The MBF algorithm performs close to the BF algorithm; there is only a 0.02 dB difference at a BLER of 10^−4. However, at an SNR of 7 dB, the average number of iterations for the BF algorithm is 9.44 while that for the MBF algorithm is only 2.07, a reduction ratio of 78.07% ((9.44 − 2.07)/9.44). Fig. 9 demonstrates the performance of the MBF algorithm with different values of D_Th. We observe that if an appropriate threshold D_Th is selected, the MBF algorithm can perform close to the BF algorithm.

VI. CONCLUSION
In this paper, we have proposed several hard-decision decoding algorithms for binary RM codes. They are novel iterative decoding algorithms. The performance is improved by updating the reliability measures and flipping codeword bits. These algorithms do not require large computational complexity since they converge in a small number of iterations. For hard-decision decoding, our proposed NBF decoding algorithm outperforms the conventional majority-logic decoding algorithm with 0.55 dB to 0.81 dB gain. The proposed algorithms have the same complexity orders. Specifically, the BF and MBF decoding algorithms only require integer additions.
Decoding RM codes with our iterative decoding algorithms converges very fast within a small number of iterations. Moreover, the proposed MBF/NMBF algorithms decrease the number of iterations such that the overall complexity is reduced; the average number of iterations can be reduced by 40% to 80% compared to the BF/NBF algorithms.
Possible future work includes the theoretical analysis of the core parameters of the proposed algorithms, e.g., µ 0 and D Th . Furthermore, the generalization of our proposed algorithms to decode non-binary RM codes is suggested as a future research topic.