LDPC Decoding Techniques for Free-Space Optical Communications

Free-space optical (FSO) communication has attracted significant research interest in recent years owing to its high data rate, unlicensed spectrum, low cost, and inherent security. These advantages give FSO a broad range of applications, from terrestrial to satellite communication. Atmospheric turbulence (AT) induced fading is a primary impairment of the FSO link, arising from random variations of the air refractive index with time. Several statistical models have been introduced to characterize AT: the log-normal (LN) model represents weak and moderate turbulence, while the Gamma-Gamma (G-G) model is employed for strong turbulence. These models are combined with the effects of weather attenuation, geometric losses, and misalignment errors. One possible mitigation approach is channel coding, such as low-density parity-check (LDPC) codes. This paper proposes employing the Weighted Bit Flipping (WBF) and Implementation Efficient Reliability Ratio Weighted Bit Flipping (IERRWBF) decoding techniques to improve FSO link performance. The results show a substantial improvement in bit error rate (BER) over the uncoded FSO system. In addition, the obtained results show that the IERRWBF technique requires fewer iterations than WBF, especially over weak and moderate turbulence FSO channels. Regarding decoder processing time, WBF maintains a lower decoding time than IERRWBF, although the two reach the same level at high $E_{b}/N_{o}$; the resultant throughput behaves similarly for both techniques. Finally, both methods are evaluated in terms of convergence: the IERRWBF technique achieves faster convergence than WBF over all FSO channels under study.


I. INTRODUCTION
In recent years, there has been tremendous interest in free-space optical (FSO) communication, owing to its extended bandwidth compared to radio frequency (RF) and its simpler deployment compared to optical fiber cables. FSO refers to transmission through the atmosphere using optical carriers, i.e., the infrared (IR), visible, and ultraviolet (UV) bands. It offers significant technical and operational advantages, such as a high data rate, license-free bands, and low power. It is therefore used for a variety of applications ranging from terrestrial to satellite communication.
The associate editor coordinating the review of this manuscript and approving it for publication was Qunbi Zhuge.
Despite the attractions of FSO systems, the channel through which the light propagates is not ideal. The main challenges engineers should consider when building a reliable FSO link are weather attenuation, atmospheric turbulence, and geometric losses. Atmospheric turbulence causes intensity fluctuations, known as scintillation, due to variations of the air refractive index. Several models have been proposed to study the turbulence effect on the FSO link: the log-normal (LN) channel is the best fit for weak and moderate turbulence, while the Gamma-Gamma channel is the best fit for strong turbulence. Weather conditions such as dust, fog, rain, haze, and snow are the main causes of attenuation, which, together with turbulence-induced fading, significantly affects the FSO system's performance [1], [2].
LDPC codes, invented by Gallager in 1962, are an exceptional channel coding technique shown to approach the Shannon limit [16]. Due to their powerful performance and low decoding complexity, they have gained great interest and proved competitive with turbo codes [17]. LDPC codes have been broadly adopted by several digital communication standards as error-correcting codes, as in DVB-S2 [18], DVB-T2 [19], IEEE 802.16 [20], IEEE 802.11 [21], and recently in Wireless Body Area Networks, as published in [22] and [23].
LDPC decoding techniques are iterative, running up to a precalculated number of iterations either until the syndrome vector becomes all-zero (no detected errors) or to output an improved codeword with the fewest remaining errors relative to the transmitted one. LDPC decoding falls into three categories: hard-decision, soft-decision, and hybrid [16]. Hard-decision techniques rely on the hard (binary) values of the received codewords to detect and correct errors, whereas soft-decision techniques operate on the soft (raw) values received from the channel. Thus, hard-decision techniques are characterized by minimum complexity but also the weakest performance, while soft-decision ones offer remarkable performance at immense complexity. Hybrid-decision techniques are introduced to compromise between BER performance and complexity. The hard-decision decoding techniques include Weighted Bit Flipping (WBF) [24], Modified Weighted Bit Flipping (MWBF) [25], Reliability Ratio Weighted Bit Flipping (RRWBF) [25], and Implementation Efficient Reliability Ratio Weighted Bit Flipping (IERRWBF) [26].
In [27], the authors compare BCH codes against LDPC codes on the FSO turbulence channel, but only under slow-fading channel conditions, and they employ the Belief Propagation (BP) technique, which is a soft-decision technique. Also, in [28], a dynamically adjusted LLR for the BP technique is proposed to enhance performance over FSO turbulence channels. Finally, in [29], the authors employ a low-density parity-check (LDPC) code to mitigate the effects of fluctuations caused by atmospheric turbulence in an uncorrelated flat free-space optical (FSO) channel; approaching the FSO channel's capacity threshold requires an optimum degree distribution for the LDPC codes, and the decoding technique used there is again the BP algorithm. Therefore, all previous attempts employed the BP technique, which is characterized by high performance but also high complexity.
This paper proposes employing hard-decision LDPC decoding techniques, namely the Weighted Bit Flipping (WBF) and Implementation Efficient Reliability Ratio Weighted Bit Flipping (IERRWBF) decoders, for the FSO communication system. These techniques are characterized by good BER performance and low complexity. To our knowledge, no published works exist that use such decoding techniques to enhance FSO communication channels. In addition, weather effects, together with misalignment errors, geometric losses, and attenuation, are studied for the log-normal and Gamma-Gamma channels. Results show a superior improvement over the uncoded FSO communication system.
The remainder of this paper is arranged as follows: Section II introduces the proposed system model; Section III presents the LDPC encoding and decoding techniques; Section IV shows the simulation results; and Section V concludes the paper. Fig. 1 illustrates the proposed FSO system, where the signal is encoded using LDPC and modulated using OOK.

II. PROPOSED SYSTEM MODEL
The signal is affected by turbulence and path loss due to weather attenuation. The received signal r(t) is given by [1], [30]

r(t) = η I y(t) + n(t),

where y(t) is the transmitted electrical signal, η is the detector responsivity, and I is the received signal intensity,

I = β h I_o,

with I_o the intensity of the received signal without the channel effect, h the irradiance of the channel between the transmitter and the receiver, and β the path-loss coefficient. The term n(t) is additive white Gaussian noise (AWGN) with zero mean and variance σ²_n = N_o/2, resulting mainly from background noise [31]. Maximum likelihood (ML) detection then produces the estimate x̂(t), and the LDPC decoder recovers the original signal. Many mathematical channel models have been proposed to characterize turbulence from weak to strong regimes; their probability density functions (PDFs) are given below.
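As a minimal illustration of the received-signal model above (the function name and all parameter values are our own illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def fso_receive(y, h, eta=0.5, I_o=1.0, beta=1.0, N_o=0.1):
    """Intensity channel r(t) = eta * I * y(t) + n(t), with I = beta * h * I_o
    and n(t) zero-mean AWGN of variance N_o / 2."""
    I = beta * h * I_o
    n = rng.normal(0.0, np.sqrt(N_o / 2.0), size=len(y))
    return eta * I * y + n

ook = np.array([1.0, 0.0, 1.0, 1.0])   # OOK intensity symbols
r = fso_receive(ook, h=0.8, N_o=0.0)   # noiseless sanity check: r = 0.4 * ook
```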
1) LOG-NORMAL CHANNEL
For a weak atmospheric channel, the channel coefficient h follows the log-normal distribution, whose PDF is given by [32]

f(h) = 1/(2h √(2π σ²)) exp( −(ln(h) − 2µ)² / (8σ²) ),   h > 0,

where h = exp(2X) and X is an independent, identically distributed (i.i.d.) Gaussian random variable (RV) with mean µ, standard deviation σ, and variance σ². To ensure that the fading channel neither attenuates nor amplifies the average power, the fading coefficients are normalized as E[h] = e^{2(µ+σ²)} = 1, i.e., µ = −σ². This model characterizes the weak-turbulence condition.
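Under the normalization above, log-normal fading samples can be drawn as follows (a minimal numpy sketch; the σ value is illustrative, not a parameter from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_fading(sigma, size):
    """Draw h = exp(2X), X ~ N(mu, sigma^2); choosing mu = -sigma^2
    enforces the normalization E[h] = exp(2(mu + sigma^2)) = 1."""
    X = rng.normal(-sigma**2, sigma, size)
    return np.exp(2.0 * X)

h = lognormal_fading(sigma=0.25, size=200_000)
# empirical mean of h is close to 1 by construction
```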

2) GAMMA-GAMMA CHANNEL
The channel coefficient h of the moderate-to-strong channel follows the Gamma-Gamma distribution, whose PDF is

f(h) = [2(αβ)^{(α+β)/2} / (Γ(α)Γ(β))] h^{(α+β)/2 − 1} K_{α−β}(2√(αβh)),   h > 0,

where K_ν(·) is the modified Bessel function of the second kind, which can equivalently be expressed through Meijer's G-function G^{m,n}_{p,q}[·] [33, Eq. (9.301)],^1 Γ(·) is the Gamma function [33, Eq. (8.310)], and β and α are the effective numbers of small-scale and large-scale eddies, respectively. Both are related to the Rytov variance σ²_l through the standard expressions

α = [exp(0.49 σ²_l / (1 + 1.11 σ_l^{12/5})^{7/6}) − 1]^{−1},
β = [exp(0.51 σ²_l / (1 + 0.69 σ_l^{12/5})^{5/6}) − 1]^{−1}.
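Gamma-Gamma samples follow directly from the model's construction as a product of two unit-mean Gamma variates for the large- and small-scale eddies (a sketch under that standard interpretation; the Rytov variance of 2.0 is an illustrative value, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def gg_params(rytov_var):
    """Effective large/small-scale eddy counts alpha, beta from the
    Rytov variance sigma_l^2 (plane-wave expressions)."""
    s = rytov_var
    alpha = 1.0 / (np.exp(0.49 * s / (1 + 1.11 * s**(6/5))**(7/6)) - 1.0)
    beta = 1.0 / (np.exp(0.51 * s / (1 + 0.69 * s**(6/5))**(5/6)) - 1.0)
    return alpha, beta

def gg_fading(alpha, beta, size):
    """h as the product of two independent unit-mean Gamma variates."""
    return rng.gamma(alpha, 1.0 / alpha, size) * rng.gamma(beta, 1.0 / beta, size)

a, b = gg_params(2.0)            # strong turbulence: alpha > beta
h = gg_fading(a, b, 200_000)     # unit-mean fading samples
```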

III. LDPC ENCODING AND DECODING TECHNIQUES
LDPC codes are described by their sparse parity-check matrices. Hence, an efficient encoding process is used rather than converting the parity-check matrix into its generator matrix, which would violate the sparseness of the H matrix and result in excessive complexity [34]. The encoding procedure used throughout the simulations is inspired by [34]. The encoder is constructed using Gaussian elimination, producing an exact lower triangular form as presented in Fig. 2. The vector x is divided into a systematic part s and a parity part p, resulting in x = [s, p]. The encoder then proceeds as follows: i) fill s with the (N − M) desired data symbols; ii) calculate the M parity-check symbols by back-substitution.
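Steps i)-ii) can be sketched over GF(2) as follows (the tiny H below is a hypothetical toy matrix, not one of the paper's codes from Fig. 2):

```python
import numpy as np

def encode_lt(H, s):
    """Encode x = [s, p] for H = [A | T] in lower triangular form over GF(2):
    row i of H x^T = 0 gives p_i = A_i s + sum_{j<i} T_ij p_j (mod 2)."""
    M, N = H.shape
    A, T = H[:, :N - M], H[:, N - M:]
    p = np.zeros(M, dtype=int)
    for i in range(M):                       # back-substitution, row by row
        p[i] = (A[i] @ s + T[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])

# toy lower-triangular parity-check matrix: 2 data bits, 2 parity bits
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
x = encode_lt(H, np.array([1, 0]))           # x is a valid codeword: H x^T = 0
```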
The encoding technique that transforms the matrix H into the desired shape requires preprocessing of O(n³) operations, after which the encoding itself requires O(n²) operations; the preprocessing, however, causes the matrix to lose its sparseness. We expect about n² r(1 − r)/2 XOR operations to achieve this encoding, where r is the code rate, which makes it a suitable encoding for these codes. Notably, although the encoding technique presented in [34] is quadratic, the constants preceding the n² term are routinely negligible, so the encoding complexity remains acceptable even for blocks with extensive lengths.

^1 The Meijer G-function is a standard built-in function available in most popular mathematical software packages, such as Maple and Mathematica.
The LDPC encoder employed is shown in Fig. 2. Using permutations of rows and columns only, the parity-check matrix can be brought to the form presented in Fig. 2, i.e., into approximate lower triangular form.
Furthermore, these matrices remain sparse, and the lower triangular part T contains ones along the matrix diagonal. Multiplying this matrix from the left by

[ I        0 ]
[ −ET⁻¹    I ]

results in Eq. (6).
where s denotes the systematic part in x = (s, p₁, p₂), the parity part is the combination of p₁ and p₂, the length of p₁ is g, and the length of p₂ is (m − g). The equation H x^T = 0 then naturally splits into two equations,

A s^T + B p₁^T + T p₂^T = 0,
(−ET⁻¹A + C) s^T + (−ET⁻¹B + D) p₁^T = 0,

from which p₁ and p₂ are obtained. Accordingly, a systematic codeword of the form c = (s, p₁, p₂) is produced.
The decoding process starts by applying the parity-check matrix H of size M × N, which yields a syndrome vector of length 1 × M that can be written as [16]

s = z · H^T (mod 2),

where z contains the hard (binary) values computed from the soft vector y received from the communication channel. The syndrome determines whether the received codeword has been successfully decoded or requires further processing to be completely corrected. LDPC decoding relies entirely on the parity-check matrix H with entries h(m, n). The BER performance is influenced by the size of H as well as the complexity of the decoding techniques [35]. As the parity-check matrix of a given code is enlarged, the BER performance improves toward the Shannon limit [35]; however, the decoder complexity grows quadratically [35] and the decoding time escalates.
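The syndrome check above is a one-line matrix product; a minimal sketch (the toy H and vectors are hypothetical examples, not from the paper):

```python
import numpy as np

def syndrome(H, z):
    """s = z H^T (mod 2); an all-zero syndrome means all checks are satisfied."""
    return (z @ H.T) % 2

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
valid = np.array([1, 0, 1, 1])     # satisfies both parity checks
corrupt = np.array([1, 0, 1, 0])   # one flipped bit -> nonzero syndrome
```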
The Tanner (bipartite) graph is the best visualization of LDPC codes, as presented in [36]. Its two types of nodes, variable nodes and check nodes, represent the codeword bits and the parity checks, respectively. Fig. 3 shows the Tanner graph of a particular code.

A. HARD DECISION DECODING TECHNIQUES
A hard-decision technique named the Bit-Flipping (BF) technique is proposed in [16]. Low hardware complexity is the main characteristic of the BF technique; however, it suffers significant performance degradation. A hard-decision iteration has a complexity of O(Mρ + Nγ), as presented in [37], where ρ is the number of ones per row of H and γ is the number of ones per column. To enhance the BF properties, several variants have been proposed, as follows:

1) WEIGHTED BIT FLIPPING (WBF)
As presented in [24], the aim in developing the BF technique further is to achieve enhanced error-correcting performance by incorporating reliability information about the received symbols into the decoding decisions. The unavoidable price of this performance enhancement is extra decoding complexity.
The decoding procedure starts by locating, for each check node, the least reliable variable node, i.e., the one with the smallest soft magnitude participating in that check node:

| y_{n_min} | = min{ | y_n | : n ∈ N(m) },    (12)

where n_min is the variable node having the lowest soft magnitude and |y_n| is the absolute value of the soft value y_n received for variable node n. Let z_n be the binary (hard) equivalent of the soft value y_n; the larger |y_n| is, the higher the reliability of the hard-decision digit z_n. For each variable node, the error term E_n is calculated as

E_n = Σ_{m ∈ M(n)} (2s_m − 1) | y_{n_min}^{(m)} |,    (13)

where s_m is the syndrome component connected to the m-th check node. E_n is the weighted checksum connected to code-bit location n, and the bit with the largest E_n is flipped.
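A minimal numpy sketch of WBF following (12)-(13) (the Hamming H and soft values are our own illustrative choices; we assume a bipolar mapping where y < 0 gives hard bit 1):

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """WBF sketch: per (12), w_m is the smallest |y_n| in check m; per (13),
    E_n sums (2 s_m - 1) w_m over the checks containing n; flip argmax(E_n)."""
    z = (y < 0).astype(int)
    w = np.array([np.abs(y[H[m] == 1]).min() for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = (z @ H.T) % 2
        if not s.any():                 # all-zero syndrome: decoding done
            break
        E = ((2 * s - 1) * w) @ H       # weighted checksum per bit
        z[np.argmax(E)] ^= 1            # flip the least reliable bit
    return z

# (7,4) Hamming parity-check matrix; all-zero codeword sent as +1 pulses,
# with bit 2 received as a small negative (erroneous) soft value
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1.0, 1.0, -0.3, 1.0, 1.0, 1.0, 1.0])
z = wbf_decode(H, y)                    # recovers the all-zero codeword
```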

2) MODIFIED WEIGHTED BIT FLIPPING (MWBF)
The error term specified in (13) depends only on check-node information. To enhance decoding performance, the MWBF technique [25], proposed by Zhang et al., also takes the information delivered by the variable node itself into account. The main difference of MWBF from WBF is the second step of the iterative decoding procedure: for MWBF, the error term is calculated as [25]

E_n = Σ_{m ∈ M(n)} (2s_m − 1) | y_{n_min}^{(m)} | − ψ | y_n |,    (14)

where the weighting factor ψ is precalculated. Comparing (14) to (13), there is a supplementary term in (14), as the variable node's own information is taken into consideration. The rationale of the MWBF technique is as follows: two variable nodes may obtain the identical error term from (13) and would consequently have an equal probability of being flipped; however, if their values |y_n| differ, the one with the lowest magnitude should be inverted, as it is most probably unreliable. By adding the multiplicative term ψ·|y_n| in the error term (14), a better flipping decision can therefore be made. The performance of the MWBF technique depends entirely on ψ, which has to be precalculated in an off-line computing step.
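The MWBF error term (14) differs from WBF only by the ψ|y_n| penalty; a minimal sketch (toy H, soft values, and ψ = 0.2 are our own illustrative choices, not an optimized factor from [25]):

```python
import numpy as np

def mwbf_error_terms(H, y, z, psi=0.2):
    """MWBF metric per (14): the WBF checksum minus psi * |y_n|."""
    s = (z @ H.T) % 2
    w = np.array([np.abs(y[H[m] == 1]).min() for m in range(H.shape[0])])
    return ((2 * s - 1) * w) @ H - psi * np.abs(y)

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1.0, 1.0, -0.3, 1.0, 1.0, 1.0, 1.0])
E = mwbf_error_terms(H, y, (y < 0).astype(int))
# the erroneous bit 2 gets the largest error term and would be flipped
```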

3) RELIABILITY RATIO WEIGHTED BIT FLIPPING (RRWBF)
The MWBF technique enhances the error-correction performance of the conventional WBF technique; however, it increases the operations needed to calculate the error term, and the term ψ used in (14) must be carefully chosen [25]. Both the WBF and MWBF techniques share a disadvantage: they treat the violation of a check node as equally attributable to all of its variable nodes, whereas any of the variable nodes connected to a violated check node may be causing this condition. For the same violated parity check, the probability that a variable node with a high soft magnitude is causing the violation is lower than for the one with the smallest soft magnitude. Therefore, in [26] a new quantity called the Reliability Ratio (RR) is proposed to resolve this complication, defined as

R_mn = ϒ | y_n | / | y_{n_max}^{(m)} |,    (15)

where the normalization factor ϒ ensures that Σ_{n ∈ N(m)} R_mn = 1, and the highest soft magnitude among all variable nodes connected to check node m is

| y_{n_max}^{(m)} | = max{ | y_n | : n ∈ N(m) },    (16)

with n_max the location of the variable node having the largest soft value. Then, instead of determining the error term E_n as in (14) using y_{n_min}, the authors of [26] proposed replacing the MWBF formula with

E_n = Σ_{m ∈ M(n)} (2s_m − 1) / R_mn.    (17)

The steps of RRWBF are exactly those of the WBF technique, except that E_n in step 2 is determined by (17).
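Equations (15)-(17) translate directly into code (the toy H and soft values are our own illustrative choices):

```python
import numpy as np

def rrwbf_error_terms(H, y, z):
    """RRWBF metric per (15)-(17): R_mn = Upsilon * |y_n| / |y_max^(m)|,
    normalized so each check's ratios sum to 1; E_n sums (2 s_m - 1)/R_mn."""
    s = (z @ H.T) % 2
    absy = np.abs(y)
    E = np.zeros(len(y))
    for m in range(H.shape[0]):
        idx = np.flatnonzero(H[m])
        R = absy[idx] / absy[idx].max()   # per (15)-(16)
        R = R / R.sum()                   # normalization factor Upsilon
        E[idx] += (2 * s[m] - 1) / R      # per (17)
    return E

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1.0, 1.0, -0.3, 1.0, 1.0, 1.0, 1.0])
E = rrwbf_error_terms(H, y, (y < 0).astype(int))
# the erroneous bit 2 gets the largest error term
```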

4) IMPLEMENTATION EFFICIENT RELIABILITY RATIO WEIGHTED BIT FLIPPING (IERRWBF)
The reliability-ratio-based weighted bit flipping (RRWBF) technique proposed in [26] outperforms the earlier bit-flipping-based techniques. To decrease the decoding processing time at low iteration numbers, the authors of [38] proposed substituting the reliability-ratio term 1/R_mn by T_m / | y_n |, where

T_m = Σ_{n ∈ N(m)} | y_n |,    (18)

and the error term is calculated by

E_n = Σ_{m ∈ M(n)} (2s_m − 1) T_m / | y_n |.    (19)

The rest of the technique is exactly as in RRWBF, except that E_n is determined by (19).
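Because T_m can be precomputed per check as a single matrix-vector product, (18)-(19) avoid the per-edge normalization of RRWBF; a minimal sketch on the same illustrative toy example (not from the paper):

```python
import numpy as np

def ierrwbf_error_terms(H, y, z):
    """IERRWBF metric per (18)-(19): 1/R_mn is replaced by T_m / |y_n|,
    where T_m is the sum of the soft magnitudes in check m."""
    s = (z @ H.T) % 2
    absy = np.abs(y)
    T = H @ absy                              # T_m per (18)
    return (((2 * s - 1) * T) @ H) / absy     # E_n per (19)

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1.0, 1.0, -0.3, 1.0, 1.0, 1.0, 1.0])
E = ierrwbf_error_terms(H, y, (y < 0).astype(int))
# the erroneous bit 2 again gets the largest error term
```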

IV. SIMULATION RESULTS
All simulations in this work are performed using Matlab R2018a on a computer with an Intel(R) Core(TM) i7-7500U CPU @ 2.90 GHz and 8.00 GB RAM. In all analyses conducted in this section, a target BER of 10^−6 is assumed, with λ = 1550 nm, L = 1000 m, and α = 0.43 dB/km for clear weather. In the simulation results, 10^7 bits are transmitted for each E_b/N_o value. Table 1 shows the parameters used in the simulations. The parameters for the weak turbulence FSO channel are extracted from [39]; the moderate and strong turbulence channel parameters are extracted from [40] and [41], respectively. The system configuration parameters are based on [42] and [39]. The LDPC encoding parameters are determined according to [16], while the decoding parameters and techniques are obtained from [25], [26], and [38]. The uncoded FSO system is employed as a benchmark for our results.
In Fig. 4, a clear-weather condition with a weak turbulence channel is considered. The results show that the target BER can be achieved with FEC coding at an E_b/N_o lower by 5 dB and 7 dB for the WBF and IERRWBF decoding techniques, respectively. In addition, the BER of the BP technique is better than that of the proposed techniques, but BP is still considered highly complex. These results demonstrate the BER gain contributed by adding LDPC decoders to the FSO communication channel. Over the moderate turbulence channel, the IERRWBF technique achieved an E_b/N_o lower than WBF by 5 dB, compared with 2 dB over the weak turbulence channel. The strong turbulence channel effect is shown in Fig. 6. Figs. 13-15 show the throughput of the regular PEG(1008, 504) LDPC code for the different turbulence optical channels. IERRWBF has the same throughput as WBF at low and high E_b/N_o and better throughput at moderate E_b/N_o. In comparison, WBF has a higher throughput at any E_b/N_o over the moderate turbulence channel. Although IERRWBF is power-efficient compared with WBF, WBF has a higher throughput than IERRWBF at low and moderate E_b/N_o over the strong turbulence channel.
Figs. 16-18 present the convergence of the LDPC decoders used in this work. The convergence of a decoding technique is vital: the faster the decoder converges, the faster correctly decoded data are obtained. Convergence also measures how efficiently the decoders move toward successful decoding. As shown in Figs. 16-18, IERRWBF requires fewer iterations to converge in all the turbulence channel regimes, meaning that a receiver utilizing IERRWBF is faster than one utilizing WBF; IERRWBF also converged to a lower BER than the WBF technique.

V. CONCLUSION
The proposed FSO communication system's performance metrics are evaluated for the Weighted Bit Flipping (WBF) and Implementation Efficient Reliability Ratio Weighted Bit Flipping (IERRWBF) decoding techniques. These LDPC hard-decision techniques are assessed under the combined effects of atmospheric turbulence, path loss, and pointing errors. The following performance measures are considered: bit error rate, decoding time, number of consumed iterations, decoder convergence, and throughput. Results show that IERRWBF achieves a better BER than WBF in the weak, moderate, and strong regimes, and that the E_b/N_o gain of the LDPC codes increases with the turbulence severity. Moreover, results show that IERRWBF requires more decoding time than WBF, and WBF has more throughput than IERRWBF.

VOLUME 9, 2021