An Energy-Efficient Compression Algorithm of ECG Signals in Remote Healthcare Monitoring Systems

Remote Healthcare Monitoring Systems (RHMs) that use ECG signals are very effective tools for the early diagnosis of various heart conditions. However, these systems still face a problem that reduces their efficiency: energy consumption in wearable devices, which are battery-powered and have limited storage. This paper presents a novel algorithm for compressing ECG signals to reduce energy consumption in RHMs. The proposed algorithm uses discrete Krawtchouk moments as a feature extractor to obtain features from the ECG signal. Then the Accelerated Ant Lion Optimizer (AALO) selects the optimum features that achieve the best-reconstructed signal. The proposed algorithm is extensively validated using two benchmark datasets: MIT-BIH arrhythmia and ECG-ID. It achieves average values of compression ratio (CR), percent root mean square difference (PRD), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and quality score (QS) of 15.56, 0.69, 44.52, 49.04, and 23.92, respectively. The comparison demonstrates the advantages of the proposed compression algorithm over recent algorithms concerning the mentioned performance metrics. The algorithm is also tested and compared against other existing algorithms concerning processing time, compression speed, and computational efficiency. The obtained results show that the proposed algorithm clearly outperforms them in terms of processing time (6.89 s), compression speed (4640.19 bps), and computational efficiency (2.95). The results also indicate that the proposed compression algorithm reduces energy consumption in a wearable device by decreasing the wake-up time by 3600 ms.


I. INTRODUCTION
According to an American Heart Association report, cardiovascular diseases (CVDs) are among the leading causes of death worldwide [1]. Medical scientists have attributed significant priority to cardiac health studies and focused on scientific advancement in cardiac activity measurement. One such research pathway is the development of traditional cardiovascular diagnostic technology implemented in hospitals and clinics. Monitoring the electrocardiogram (ECG) is the most extensively used clinical heart check. Due to its straightforward, risk-free, and cheap use, it has become a valuable test for different cardiac disturbances. The ECG records the electrical activity of the heart [2]. It provides complete information about the heartbeat's electrical activity for a better diagnosis. Each heartbeat in the ECG signal produces various deflections expressed as a series of waves. The regular heartbeat consists of five waves depicted by five symbols P, Q, R, S, and T, as illustrated in Fig. 1. The P wave denotes atrial depolarization, the QRS complex is created by atrial repolarization and ventricular depolarization, and the T wave is generated by ventricular repolarization. Continuous healthcare monitoring enables proactive medical treatment of CVDs. It helps in the early detection of diseases before the symptoms intensify. Recently, healthcare has seen rapid advancements, particularly in Remote Monitoring Systems. (The associate editor coordinating the review of this manuscript and approving it for publication was Shovan Barma.)
Meanwhile, doctors are beginning to demand long-term ECG signals and to monitor these signals remotely to help understand the evolution and life cycle of such diseases [3]. In Remote Healthcare Monitoring Systems, wearable sensors can collect precise physiological data and wirelessly communicate it to a smartphone application through Bluetooth. The application receives the data and sends it to a cloud server, where several algorithms for detecting and diagnosing diseases are applied, as shown in Fig. 2 [4], [5].
Remote Monitoring Systems are an extremely powerful tool for transmitting data remotely and performing speedy diagnoses. However, these systems continue to encounter challenges that impair their effectiveness. Energy consumption is one of the issues that Remote Healthcare Monitoring Systems' sensors face. Another issue is the device's low memory and processing capability due to its battery-powered nature; it is also limited in terms of storage and computational capacity. Typically, sensors consume energy as a result of data sensing, processing, and transmission. Data transmission is regarded as the primary source of energy waste, as power is spent during the sending and receiving of data. As a result, the signal data should be minimized to conserve energy [6]. A wearable sensor may be used inside the patient's body for healthcare monitoring purposes; since these sensors are battery-driven, replacing or recharging them requires a serious medical procedure [7]. This has motivated us to propose solutions to the energy consumption problem in Remote Healthcare Monitoring Systems' devices. One approach to this problem is signal data compression, which reduces the size of the signal being stored or transmitted. Besides reducing energy consumption, data compression has numerous advantages, such as reduced memory storage, reduced processing time, and accelerated data processing. In this work, we focus on energy consumption reduction because it is an important problem in Remote Healthcare Monitoring Systems' sensors. The majority of high-performance ECG signal compression methods are inappropriate for wireless biosensors due to their complexity. Therefore, the compression technique employed in remote healthcare systems should be extremely efficient, straightforward, and rapid. In general, effective and efficient compression algorithms should achieve high compression ratios while maintaining the visual quality of the compressed data [8].
This is a significant issue in ECG signal compression since the loss of medical data may result in an incorrect diagnosis. Our work incorporates a range of contributions that can be summarized as follows:
• A highly efficient, rapid, and simple compression technique that is well-suited for wireless biosensors in Remote Monitoring Systems. It achieves a high compression ratio while preserving the reconstructed signal's quality, thereby reducing energy consumption.
• A modified version of the Ant Lion Optimizer (ALO), in which a levy flight is applied to accelerate it by reducing the number of iterations performed by the Accelerated Ant Lion Optimizer (AALO) in the search for optimum solutions.
• Experiments demonstrating the proposed algorithm's superior performance on two different datasets.
The proposed ECG compression technique combines discrete Krawtchouk moments and the Accelerated Ant Lion Optimizer (AALO). In the first phase, the discrete Krawtchouk moments are used as a feature descriptor of the ECG signal. In the second phase, the optimization algorithm (AALO) selects the optimum features that achieve the best-reconstructed signal. AALO selects the best feature combination by minimizing the Mean Square Error (MSE) as the objective function. The remainder of the paper is structured as follows: Section 2 discusses the literature review and motivation. Krawtchouk moments, the Ant Lion Optimizer, and the use of levy flight to accelerate ALO are detailed in Section 3. We introduce our proposed algorithm in Section 4. The numerical experiments, the obtained results, and discussions are presented in Section 5. Section 6 concludes the paper.

II. LITERATURE REVIEW AND MOTIVATION
Due to the importance of ECG in healthcare monitoring, it has been addressed in many research works. In general, there are two main types of ECG data compression techniques: lossless and lossy. In lossy compression, the reconstructed signal involves some loss of data. Lossy techniques therefore achieve compression ratios much higher than lossless techniques, but medical regulatory agencies do not accept them. In lossless compression, by contrast, the original signal can be reconstructed exactly from its compressed form, although high compression ratios cannot be reached. Since there is no loss of information, the reconstructed and original signals are much the same, so this type of compression is more emphasized for ECG signals [9]. The compression methods for ECG signals are categorized into three groups: direct compression methods, transform-based compression methods, and parameter-extraction-based methods. Transform-based methods are often favored among these three categories because they perform effectively in terms of CR and signal restoration.
This method converts the signal from the time domain into other domains and rejects irrelevant coefficients; the key idea is energy redistribution [10]. The transformed based methods include Discrete cosine transform (DCT) [11], discrete Fourier transform (DFT) [12], discrete wavelet transform (DWT) [13], [14]. The previous transforms are used extensively in ECG compression due to their simplicity.
Several ECG compression algorithms were developed based on DWT [10], [15]-[18]. Two-dimensional (2D) ECG compression algorithms use modified versions of transforms such as DCT [19], 2D-DWT [20], and singular value decomposition (SVD) [21]. Using encoding in transform-based methods achieves a high CR. Various types of coding are used to compress ECG signals, such as Huffman encoding, run-length encoding (RLE), and Lempel-Ziv-Welch (LZW) encoding. Rajoub [22] introduced an ECG data compression algorithm based on efficient encoding. Here, the signal is decomposed into frequency bands using WT, and then efficient coding is applied. Pooyan et al. [23] presented an efficient compression method using set partitioning in hierarchical trees (SPIHT) coding. The SPIHT compression algorithm has low complexity and achieves a high CR with high reconstruction efficiency. Sharifahmadian [24] proposed a novel coding technique to improve compression efficiency called enhanced set partitioning in hierarchical trees (ESPIHT). Other data compression methods used to compress ECG data include a discrete sine interpolation (DSI) method [25] and discrete orthogonal Stockwell transforms [26]. Another work on ECG data compression using the modified embedded zero-tree wavelet (MEZW) was proposed by Tohumoglu and Sezgin [27]. They improved the compression algorithm's efficiency by experimenting with different wavelet types and threshold values. In the past few years, ECG compression methods have grown significantly in healthcare systems. Kumar et al. [28] introduced a hybrid approach combining Singular Value Decomposition (SVD) and Embedded Zero Tree Wavelet (EZTW) techniques. This method achieves a high compression ratio in telemedicine systems. Elgendi et al. [29] introduced ECG data compression for E-health applications. They used decimation by a factor B/K and a TERMA-based QRS detector.
Window-based turning angle detection and adaptive tuning of the angle threshold were introduced by Zhou and Wang [30] to reduce the wireless data transmission rate in wearable healthcare sensor systems. Chandra et al. [31] presented a new algorithm called Cosine Modulated Filter Banks (CMFBs) to compress ECG data. In that algorithm, the ECG signal is encoded by decomposing it into different frequency bands. A threshold is applied to eliminate negligible coefficients; the threshold value is estimated by analyzing each band's significant energy. Besides, run-length encoding (RLE) is used for compression enhancement. Another work introduced a deep-learning-based spindle convolutional auto-encoder for compressing ECG signals [32]. The spindle convolutional auto-encoder is mainly composed of a convolutional encoder and decoder with functional layers. It has a high CR and ECG compression of outstanding quality. Chagnon and Rebollo-Neira [33] introduced a novel strategy called the Mixed Transform, in which heartbeats are converted from 1D to 2D, with a DWT applied along one dimension and a DCT along the other. A comparative performance analysis of various wavelet-based ECG compression methods was presented by Chandra et al. [34], where the wavelet transform is used with coding types such as Huffman encoding, run-length encoding (RLE), and Lempel-Ziv-Welch (LZW) encoding. Khalid and Boudraa [35] suggested a compression algorithm based on empirical mode decomposition (EMD). For ECG data compression, a combination of EMD and wavelet transform was proposed by Wang et al. [36]. A new ECG compression algorithm was suggested by Jha and Kolekar [37]. This algorithm uses sifting-function-based empirical mode decomposition (EMD) to obtain the first intrinsic mode function (IMF), and the discrete wavelet transform (DWT) is then applied. Finally, run-length encoding is implemented to obtain the compressed form of the ECG signal.
Tsai and Tsai [38] developed a multichannel ECG compression algorithm to reduce storage and transmission cost; it employs adaptive linear prediction to exploit intra- and inter-channel decorrelation. For entropy coding, they additionally employ the adaptive Golomb-Rice codec. Zheng et al. [39] presented a simple and effective approach for decomposing ECG signals using SVD, feeding the decompressed data to a convolutional neural network (CNN) and a support vector machine (SVM) for classification. Jha and Kolekar [40] provide a methodological review of multiple ECG signal compression methods. This research investigates the benefits and drawbacks of various ECG data compression strategies. It also offers various ECG compression methodology validation techniques. Although compression techniques have been widely developed, validation of compression methods remains a promising research field for achieving efficient and dependable performance.
In the present era, orthogonal moments have gained importance because they can effectively represent images and signals in many compression, processing, and pattern recognition applications [41]-[45]. Discrete orthogonal moments are more efficient than continuous orthogonal moments because the latter require numerical approximation of continuous integrals, normalization of the coordinate space, and additional computation. Coordinate space transformations are not required for discrete orthogonal moments such as Krawtchouk and Tchebichef moments. Moreover, there is no numerical approximation, as the basis in the discrete domain of the image coordinate space is orthogonal [46]. Moments based on discrete orthogonal polynomials are commonly used in signal compression. If a discrete orthogonal moment is chosen correctly, the energy in a signal is concentrated in a relatively small percentage of the moment coefficients; these coefficients are then used to generate the reconstructed signal. Hosny et al. [43] propose efficient compression of biomedical signals using Tchebichef moments and the Artificial Bee Colony (ABC). The Tchebichef moments extract bio-signal features, and the ABC selects the best features that result in the best-reconstructed signal quality. However, this algorithm has a main drawback: it is highly time-consuming, due to slow convergence and the large number of iterations used by the ABC algorithm in searching for the optimum solution. To overcome this drawback, in this paper we use Krawtchouk moments as a feature descriptor, which outperform Tchebichef moments in terms of reconstruction quality and speed, and the Ant Lion Optimizer (ALO), which converges faster than ABC, to select the optimum features that achieve the optimum reconstructed signal. In addition, we combined ALO with levy flights, a type of random walk whose step lengths are not constant but determined by the levy distribution.
Levy flight makes large jumps in a random walk; this allows an individual to visit new sites that the swarm has not visited, which accelerates the convergence of ALO. Hybridizing levy flight with ALO finds the optimal features in the lowest number of iterations, reducing time consumption.

III. PRELIMINARIES
A. BLOCK-BASED DISCRETE KRAWTCHOUK MOMENTS
Krawtchouk moments are discrete orthogonal moments obtained from Krawtchouk orthogonal polynomials [47]. The computation of Krawtchouk moments faced numerical fluctuations with higher orders, which led to high reconstruction errors during the compression process. So, we apply a block-based moment computation algorithm to overcome this weakness and retain ideal signal reconstruction. We will use the forward discrete Krawtchouk transform of order (l) to extract the features of the signal and the inverse Krawtchouk transform-based to reconstruct the signal.
For a signal of length L, the forward discrete Krawtchouk transform of order l is formulated as:

$K_l = \sum_{x=0}^{L-1} \bar{k}_l(x; p, L-1)\, s(x), \quad l = 0, 1, \ldots, L-1$

The inverse Krawtchouk transform-based signal reconstruction is formulated as:

$S(x) = \sum_{l=0}^{L-1} K_l\, \bar{k}_l(x; p, L-1)$

where $s(x)$ indicates the original signal, $S(x)$ indicates the reconstructed signal, $K_l$ is the set of moment coefficients of the signal, and $\bar{k}_l$ are the weighted discrete Krawtchouk polynomials of order l, which are defined by the recurrence relation of Eqs. (3)-(5). From Eqs. (3)-(5), the Krawtchouk kernel matrix can be formulated as in Eq. (9), shown at the bottom of the next page.
Depending on the block size l, the kernel size is (l × l). For l = 8 and p = 0.8, the (8×8) Krawtchouk kernel matrix, $Kernel_{8\times 8}$, is shown at the bottom of the page.
In general, the block-based forward Krawtchouk transform is calculated using matrix multiplication as:

$D = K\,S$

where D is the vector of moment coefficients, S is a block of the signal, and K is the Krawtchouk kernel matrix. The block-based inverse Krawtchouk transform is calculated as:

$R = K^{T} D$

where R is the reconstructed signal block and $K^{T}$ is the transpose of the orthonormal kernel.
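The forward/inverse block transforms above can be exercised numerically. A minimal sketch for l = 8 and p = 0.8: instead of the recurrence of Eqs. (3)-(9), the weighted Krawtchouk basis is obtained (up to column signs) by QR-orthogonalizing monomials under the binomial weight $w(x) = \binom{l-1}{x} p^x (1-p)^{l-1-x}$ — an assumption of this sketch, not the paper's construction.

```python
import numpy as np
from math import comb

# Numerical sketch of the (l x l) Krawtchouk kernel for l = 8, p = 0.8.
# The weighted Krawtchouk basis is recovered (up to row signs) by
# QR-orthogonalizing monomials under the binomial weight, rather than by
# the recurrence relation used in the paper.
def krawtchouk_kernel(l=8, p=0.8):
    x = np.arange(l)
    w = np.array([comb(l - 1, k) for k in x]) * p**x * (1 - p)**(l - 1 - x)
    V = np.vander(x, l, increasing=True)          # monomials x^0 .. x^(l-1)
    Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * V)  # orthonormal columns
    return Q.T                                    # rows are basis functions

K = krawtchouk_kernel()
S = np.sin(np.linspace(0, np.pi, 8))  # one signal block
D = K @ S                             # forward transform, Eq. (10)
R = K.T @ D                           # inverse transform, Eq. (11)
print(np.allclose(R, S))              # all moments kept -> exact reconstruction
```

Because the kernel rows are orthonormal, keeping all l moments reconstructs the block exactly; compression comes from keeping only a subset of them.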

B. THE BASIC ALO
The Ant Lion Optimizer is inspired by the intelligent behavior of the antlion's larva, which digs a cone-shaped pit in the sand using its strong jaw. The antlion hides at the bottom of the cone, waiting for ants or insects to slip on the sand and fall in. The pit's edge is sharp, so the prey falls easily. When prey falls into the pit, the antlion slides it to the bottom. Finally, the antlion repairs the trap for the next hunt. It has been observed that the hungrier the antlion, the bigger the trap, and the higher the chance of catching an ant [48].

1) A MATHEMATICAL FORMULA FOR THE BASIC ALO
Ants and antlions are the main elements of the search space; ants move to search for food, and antlions wait to hunt. First, the positions of the ants in the search space are modeled in the matrix $H_{ANT}$ (Eq. (12)), where $H_{i,j}$ is the position of the $i$-th ant in the $j$-th dimension (variable), $n$ is the number of ants, and $m$ is the number of dimensions (variables) in the space. The fitness values of the ants can be expressed as in Eq. (13), where $f(H_{ANT})$ is the vector of fitness values of all ants and $f$ is the objective function. Besides the ants, the antlions take positions over the search space similarly to the ants.
In Eq. (14), $H_{ANTLION}$ is the position matrix of the antlions over the search space, and $HL_{i,j}$ indicates the position of the $i$-th antlion in the $j$-th dimension (variable). The fitness values of all antlions can be expressed as in Eq. (15), where $f(H_{ANTLION})$ is the vector of fitness values of all antlions, and $f$ is the objective function.

2) ANTS RANDOM WALK
Since ants in nature move at random while searching for food, this movement is modeled as a cumulative sum of random steps (Eqs. (16)-(17)), where $CS$ indicates the cumulative summation of steps at each iteration, $t_{max}$ is the maximum number of iterations, and $t$ denotes the current iteration. The step at iteration $t$ is $2r(t) - 1$, where $r(t) = 1$ if rand $> 0.5$ and $0$ otherwise, and rand is a random number between 0 and 1.
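The cumulative-sum random walk above can be sketched in a few lines (the ±1 step mapping follows the standard ALO formulation):

```python
import numpy as np

# Sketch of the cumulative-sum random walk: each step is 2*r(t) - 1 with
# r(t) = 1 if rand > 0.5 and 0 otherwise, so the walk is the running sum of
# +/-1 steps starting at 0.
def ant_random_walk(t_max, rng=np.random.default_rng(0)):
    r = (rng.random(t_max) > 0.5).astype(int)  # Bernoulli step indicator r(t)
    steps = 2 * r - 1                          # map {0, 1} -> {-1, +1}
    return np.concatenate(([0], np.cumsum(steps)))

walk = ant_random_walk(100)
print(walk[:5])
```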

3) CONVERGENCE OF ANT TOWARDS ANTLION
When an ant is trapped in the pit, it slides towards the antlion; as the ant gets close to the antlion, the boundary of the search space decreases adaptively. This may be expressed as in Eqs. (18)-(19), where $u^t$ is the upper bound at the $t$-th iteration, $l^t$ is the lower bound at the $t$-th iteration, and $I$ denotes the shrinking ratio defined as $I = 10^{w}\, t / t_{max}$, where $w$ is a constant defined as: $w = 2$ when $t > 0.1\,t_{max}$, $w = 3$ when $t > 0.5\,t_{max}$, $w = 4$ when $t > 0.75\,t_{max}$, $w = 5$ when $t > 0.9\,t_{max}$, and $w = 6$ when $t > 0.95\,t_{max}$.
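The threshold schedule for $w$ and the resulting bound shrinking can be sketched directly ($w = 1$ before the first threshold is an assumption of this sketch; the text only lists the cases $w \geq 2$):

```python
# Sketch of the adaptive shrinking: the bounds at iteration t are the
# original bounds divided by I = 10^w * t / t_max, with w stepping up at the
# thresholds listed above (w = 1 before the first threshold is an assumption).
def shrink_ratio(t, t_max):
    w = 1
    if t > 0.95 * t_max:
        w = 6
    elif t > 0.9 * t_max:
        w = 5
    elif t > 0.75 * t_max:
        w = 4
    elif t > 0.5 * t_max:
        w = 3
    elif t > 0.1 * t_max:
        w = 2
    return 10**w * t / t_max

def shrunk_bounds(lb, ub, t, t_max):
    I = shrink_ratio(t, t_max)
    return lb / I, ub / I

print(shrunk_bounds(-100.0, 100.0, 50, 100))  # (-2.0, 2.0): the trap tightens
```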

4) ANTLIONS' TRAPS AFFECT ANTS
The random walk of each ant is affected by the antlions' traps, either by an antlion selected using the roulette wheel or by the elite antlion, in every iteration. This assumption may be expressed mathematically in simple form as in Eqs. (20)-(21), where $u_i^t$ indicates the upper bound for the $i$-th ant at the $t$-th iteration, $l_i^t$ indicates the lower bound for the $i$-th ant at iteration $t$, $u^t$ is the upper bound, $l^t$ is the lower bound, and $H_{ANTLION_j}^t$ is the position of the $j$-th antlion at the $t$-th iteration.

5) ENSURE THE OCCURRENCE OF RANDOM WALKS
Since the positions of ants change at every iteration and the search space has boundaries, the random walk is kept inside the search space with a min-max normalization (Eq. (22)), where $R_j^t$ indicates the random walk in the $j$-th dimension at the $t$-th iteration, $x_j^i$ indicates the minimum of the random walk $R_j^t$ for the $j$-th dimension and $i$-th ant, $y_j^i$ indicates the maximum of the random walk $R_j^t$ for the $j$-th dimension and $i$-th ant, $u_j^t$ is the upper bound in the $j$-th dimension at the $t$-th iteration, and $l_j^t$ is the lower bound in the $j$-th dimension at the $t$-th iteration.
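The min-max normalization above can be sketched as follows: the walk is rescaled from its own range onto the current bounds, so its extremes land exactly on the bounds and it can never leave the search space.

```python
import numpy as np

# Sketch of the min-max normalization: a random walk R is mapped from its own
# range [min(R), max(R)] onto the current bounds [lower, upper].
def normalize_walk(R, lower, upper):
    a, b = R.min(), R.max()
    return (R - a) * (upper - lower) / (b - a) + lower

rng = np.random.default_rng(1)
R = np.cumsum(np.where(rng.random(50) > 0.5, 1.0, -1.0))
R_norm = normalize_walk(R, -2.0, 2.0)
print(R_norm.min(), R_norm.max())  # exactly the bounds -2.0 and 2.0
```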

6) RANDOM WALKS AROUND ELITE AND SELECTED ANTLION
As explained above, the better the antlion, the higher its chance of catching an ant, and the roulette wheel is used to select the fittest antlion. On the other hand, each iteration produces a best antlion. So the movement of ants is affected by the elite antlion and by an antlion selected using the roulette wheel. An ant walks randomly around $H_{ANTLION}^{(elite)}$ and $H_{ANTLION}^{(selected)}$ as in Eq. (23), where $H_{ANT_i}^t$ is the position of the $i$-th ant at the $t$-th iteration, $R_{selected}^t$ is the random walk around the antlion selected using the roulette wheel at the $t$-th iteration, and $R_{elite}^t$ is the random walk around the elite at the $t$-th iteration.

Algorithm 1 Accelerated Ant Lion Optimizer (AALO)
1. Randomly generate the initial populations of ants and antlions, Eqs. (12) and (14).
2. Compute the fitness values of ants and antlions, Eqs. (13) and (15).
3. Determine the antlion with the best fitness as the elite.
4. While the termination criteria are not satisfied
5.   For each ant
6.     Select an antlion using roulette wheel selection.
7.     Update $l^t$ and $u^t$ using Eqs. (18)

7) CATCHING ANTS
After catching an ant and pulling it into the sand, the antlion updates its position to that of the corresponding ant. This happens when the ant's fitness is better than the antlion's fitness, formulated as in Eq. (24), where $H_{ANTLION_j}^t$ indicates the $j$-th antlion's position at the $t$-th iteration, and $H_{ANT_i}^t$ indicates the $i$-th ant's position at the $t$-th iteration.
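The basic loop of subsections 1)-7) can be condensed into a minimal minimization sketch. Several details are simplified assumptions here (a single shrink schedule, a uniform walk around the two guiding antlions, replacing the worst antlion on a catch); this is not the paper's implementation.

```python
import numpy as np

# Minimal sketch of the basic ALO loop, minimizing a sphere function.
rng = np.random.default_rng(42)

def sphere(x):
    return float(np.sum(x**2))

def roulette(fitness):
    # lower fitness (minimization) -> higher selection weight
    weights = 1.0 / (fitness - fitness.min() + 1e-9)
    return rng.choice(len(fitness), p=weights / weights.sum())

def alo(f=sphere, n=10, dim=2, lb=-5.0, ub=5.0, t_max=100):
    antlions = rng.uniform(lb, ub, (n, dim))
    fit = np.array([f(a) for a in antlions])
    elite, elite_f = antlions[fit.argmin()].copy(), fit.min()
    for t in range(1, t_max + 1):
        I = 1 + 10 * t / t_max                 # simplified shrink ratio
        step = (ub - lb) / I                   # walk range tightens over time
        for _ in range(n):
            sel = antlions[roulette(fit)]
            # walk around the selected antlion and the elite, then average
            ant = (sel + rng.uniform(-step, step, dim)
                   + elite + rng.uniform(-step, step, dim)) / 2
            ant = np.clip(ant, lb, ub)
            fa = f(ant)
            j = fit.argmax()                   # worst antlion
            if fa < fit[j]:                    # "catching": antlion takes ant's place
                antlions[j], fit[j] = ant, fa
        if fit.min() < elite_f:                # keep the best-ever elite
            elite, elite_f = antlions[fit.argmin()].copy(), fit.min()
    return elite, elite_f

best, best_f = alo()
print(best_f)
```

With elitism the best-ever solution never worsens, so the returned fitness decreases monotonically over iterations.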

C. DESCRIPTION OF THE ACCELERATED ANT LION OPTIMIZER (AALO)
1) CONCEPT OF LEVY FLIGHT
Levy flight is a random walk in which the step length is not constant but is drawn from the levy distribution. The levy distribution has infinite variance and infinite mean, with power-law step sizes. It is helpful for stochastic algorithms and plays a role in both exploration and exploitation [49]. A levy flight step is expressed mathematically in Eqs. (25)-(26), where x and y are random numbers drawn from the normal distribution. In the accelerated algorithm, the random walks of Eq. (23) are replaced by levy walks (Eq. (27)), where $H_{ANT_i}^t$ is the position of the $i$-th ant at the $t$-th iteration, $L_{selected}^t$ is the levy walk around the antlion selected using the roulette wheel at the $t$-th iteration, and $L_{elite}^t$ is the levy walk around the elite at the $t$-th iteration.
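A levy-step generator matching the description above (normal draws x and y, heavy-tailed step lengths) can be sketched with Mantegna's algorithm; the exponent beta = 1.5 is an assumed value, not one from the paper.

```python
import numpy as np
from math import gamma, sin, pi

# Sketch of a levy-flight step generator via Mantegna's algorithm: x and y
# are drawn from normal distributions, and the ratio x / |y|^(1/beta) has
# power-law (heavy-tailed) step lengths.
def levy_steps(n, beta=1.5, rng=np.random.default_rng(0)):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2))) ** (1 / beta)
    x = rng.normal(0.0, sigma, n)
    y = rng.normal(0.0, 1.0, n)
    return x / np.abs(y) ** (1 / beta)  # occasional very large jumps

steps = levy_steps(1000)
print(np.abs(steps).max())
```

The occasional very large steps are what let an ant jump to unvisited regions, which is the acceleration mechanism the text describes.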

2) ACCELERATION ALO ALGORITHM (AALO)
AALO is an improved form of the original ALO in which the random walk of ALO is replaced by a levy flight. The AALO algorithm is presented in simple form in Algorithm 1.

IV. THE PROPOSED COMPRESSION ALGORITHM
The proposed algorithm uses the Krawtchouk moments to extract the ECG signal features. Then the AALO algorithm selects the optimum features that result in the best quality of signal reconstruction. The ECG signal of size (1×L) is divided into blocks of size (1×l), as shown in Fig. 3a, and the Krawtchouk moments of each block are computed using Eq. (10) to obtain the coefficients. For a (1×l) block, l coefficients in total are obtained, and the positions of these coefficients constitute the possible solutions of the optimization problem. The objective function that evaluates these solutions is the mean square error (MSE), where the coefficient positions with the minimum MSE are selected. MSE is calculated as follows:

$MSE = \frac{1}{l} \sum_{x=0}^{l-1} \left( s(x) - S(x) \right)^2$

where l represents the signal block length, $s(x)$ indicates the original signal block, and $S(x)$ indicates the reconstructed signal block.
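The MSE objective evaluated by AALO can be sketched as follows; an orthonormal DCT-II matrix stands in for the Krawtchouk kernel here, and `block_mse` is a hypothetical helper name, not from the paper.

```python
import numpy as np

# Sketch of the MSE objective: zero out all moments except a candidate set of
# positions, reconstruct the block, and score the reconstruction error.
def block_mse(s, kernel, positions):
    D = kernel @ s                    # forward transform of the block
    D_kept = np.zeros_like(D)
    D_kept[positions] = D[positions]  # keep only the candidate coefficients
    S = kernel.T @ D_kept             # inverse transform (orthonormal kernel)
    return float(np.mean((s - S) ** 2))

l = 8
n = np.arange(l)
kernel = np.sqrt(2 / l) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * l))
kernel[0] /= np.sqrt(2)               # orthonormal DCT-II stand-in kernel
s = np.cos(2 * np.pi * n / l)         # one signal block
print(block_mse(s, kernel, [0, 1, 2]))
```

Keeping all l positions gives zero error; the optimizer's job is to find the subset of a fixed size whose error is smallest.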

VOLUME 10, 2022
That is to say, the optimum solution (positions of coefficients) is the one that minimizes the MSE (best fitness value) between the original and reconstructed signal blocks. This process is shown in Fig. 3b. The required number of coefficients (RNC) is determined based on the desired compression ratio (CR) for signal reconstruction; this RNC is selected from among each block's optimal coefficients, and the optimum coefficients are used to reconstruct each block, as shown in Fig. 3c. The following equation calculates RNC:

$RNC = l / CR$

After selecting the optimum coefficients for each block, they are concatenated to obtain the compressed signal, as illustrated in Fig. 3d. In the decompression phase, the compressed signal is divided into blocks according to the desired block size, and then the inverse transform is applied to each block using Eq. (11) to obtain the reconstructed signal. The selection process of the optimum coefficients using the AALO algorithm is as follows:
1) A random population of solutions is generated, containing random positions of coefficients (the length of each solution is equal to RNC, and each entry is a random value from 1 to l for a block size of l).
2) The solutions are evaluated according to the objective function (MSE), and the phases of the AALO algorithm are applied until the stopping criteria are met, yielding the best solution.
3) The optimum coefficients are those whose positions appear in the best solution.
The flowchart of the compression and decompression processes of the proposed algorithm is illustrated in Fig. 4.
The proposed algorithm is summarized in Algorithm 2 (steps 1 to 7 show the compression process, and steps 8 to 11 illustrate the decompression process).
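The compress/decompress flow can be sketched end-to-end, with two loud stand-ins: an orthonormal DCT-II matrix replaces the Krawtchouk kernel of Eqs. (10)-(11), and greedy magnitude-based selection replaces AALO (for an orthogonal transform, keeping the largest moments minimizes the MSE).

```python
import numpy as np

# End-to-end sketch of the block-wise compression pipeline.
def dct_kernel(l):
    n = np.arange(l)
    K = np.sqrt(2 / l) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * l))
    K[0] /= np.sqrt(2)                        # orthonormal DCT-II matrix
    return K

def compress(signal, l=8, cr=4):
    rnc = l // cr                             # RNC coefficients kept per block
    K = dct_kernel(l)
    kept = []
    for s in signal.reshape(-1, l):           # split into (1 x l) blocks
        D = K @ s                             # forward transform per block
        pos = np.argsort(-np.abs(D))[:rnc]    # greedy stand-in for AALO
        kept.append((pos, D[pos]))
    return kept

def decompress(kept, l=8):
    K = dct_kernel(l)
    out = []
    for pos, vals in kept:
        D = np.zeros(l)
        D[pos] = vals
        out.append(K.T @ D)                   # inverse transform per block
    return np.concatenate(out)

t = np.linspace(0, 1, 256)
ecg_like = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
rec = decompress(compress(ecg_like))
prd = 100 * np.sqrt(np.sum((ecg_like - rec) ** 2) / np.sum(ecg_like ** 2))
print(prd)  # small PRD at CR = 4 for a smooth test signal
```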

V. EXPERIMENTAL RESULTS
All algorithms were tested using MATLAB (version R2014a) on a Microsoft Windows 7 32-bit machine with an Intel Core i3 processor and 4 GB of RAM.
A. DATASETS
The performance of the proposed compression algorithm has been evaluated on two distinct benchmark datasets containing ECG signals acquired from the records of several people of various ages and under various conditions. In the AALO algorithm, the number of search agents is set to 3 and the maximum number of iterations to 10. The ECG-ID dataset was gathered to assist research on using the electrocardiogram (ECG) for biometric identification. Signals in this dataset were obtained at a sampling frequency of 500 Hz (500 samples per second) with 12-bit resolution. Each recording includes ten annotated beats (unaudited R- and T-wave peak annotations from an automated detector) and information on age, gender, and recording date.

B. PERFORMANCE EVALUATION METRICS
The performance and efficiency of the proposed compression algorithm are estimated by the following performance criteria, described in [37] and [52]:
• Compression ratio (CR):
$CR = \frac{\text{size of the original signal in bits}}{\text{size of the compressed signal in bits}}$
• Percent root mean square difference (PRD): PRD measures the reconstructed signal quality compared to the original by computing the degree of distortion between the original and reconstructed signals. It is represented as:
$PRD = 100 \times \sqrt{\frac{\sum_{x}\left(s(x) - S(x)\right)^2}{\sum_{x} s(x)^2}}$
where $s(x)$ is the original signal and $S(x)$ is the reconstructed signal.
• Quality score (QS): QS evaluates the compression method's overall performance; a high QS indicates good compression performance. It is defined as:
$QS = \frac{CR}{PRD}$
• Signal-to-noise ratio (SNR): SNR measures the amount of noise energy added to a signal by the compression and decompression procedures. It is defined as:
$SNR = 10 \log_{10} \frac{\sum_{x}\left(s(x) - \bar{s}(x)\right)^2}{\sum_{x}\left(s(x) - S(x)\right)^2}$
where $\bar{s}(x)$ is the mean value of $s(x)$.
• Peak signal-to-noise ratio (PSNR): PSNR is the ratio of the greatest possible signal power to the corrupting noise power (Eq. (34)).
• Compression speed: the term ''compression speed'' refers to the amount of uncompressed data that can be compressed in a single second. Compression speed and computational efficiency are given by Equations (35) and (36).

C. COMPRESSION PERFORMANCE ON THE MIT-BIH ARRHYTHMIA DATASET
A comparison between the proposed algorithm and other compression algorithms [23], [31], [34], [37], [39], [53], and [54] is made in terms of CR, PRD, and QS, as illustrated in Table 2. Fig. 8 displays a comparison of the proposed algorithm's performance with other existing compression algorithms [43], [28], [55], and [56] concerning the CR-PRD results on the MIT-BIH arrhythmia dataset. The results in Table 2 and Fig. 8 show the superiority of the proposed algorithm over the other algorithms: it has a very high CR and the lowest PRD values, hence the highest QS. As a result, the proposed algorithm provides the highest reconstructed signal quality compared to the other existing methods.

D. COMPRESSION PERFORMANCE ON THE ECG-ID DATASET
Table 3 presents the compression performance of the proposed algorithm on the ECG-ID dataset, illustrating the resulting CR, PRD, SNR, PSNR, and QS performance metrics for 10 records. The compression is done at different CRs (5, 6.66, and 10), which achieve the best results in PRD, SNR, and PSNR and excellent reconstruction quality. As shown in Table 3, the average performance of the proposed algorithm in terms of CR, PRD, SNR, PSNR, and QS is 7.16, 0.43, 47.83, 55.65, and 17.98, respectively. The proposed algorithm's performance is compared to other algorithms in the literature in terms of CR, PRD, and QS, as shown in Table 4.

As reported in Table 4, for the Person 02/rec1 signal, the DEZW algorithm [56] has the highest performance in terms of CR (21.78) but does not provide the best value of PRD (9.24), and hence diagnostic information can be lost. On the contrary, the proposed algorithm has a lower CR of 4 but provides a very good PRD of 0.4462. This ensures that no diagnostic information is lost when the signal is reconstructed. Concerning the Person 03/rec1 signal at the same CR of 5, the proposed algorithm outperforms with a PRD of 0.3423. Figs. 9 and 10 display the compression of the Person 01/rec1 and Person 02/rec1 signals by the proposed algorithm.
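The metrics defined in Section V-B can be sketched in a few functions. The exact PSNR variant (peak amplitude of the original over total error energy) is an assumption of this sketch, as are the helper names.

```python
import numpy as np

# Sketch implementations of the CR-quality metrics: PRD, SNR, PSNR, and QS.
def prd(s, S):
    return 100.0 * np.sqrt(np.sum((s - S) ** 2) / np.sum(s ** 2))

def snr(s, S):
    return 10.0 * np.log10(np.sum((s - s.mean()) ** 2) / np.sum((s - S) ** 2))

def psnr(s, S):
    # assumed variant: peak amplitude of s over total error energy
    return 10.0 * np.log10(len(s) * np.max(np.abs(s)) ** 2 / np.sum((s - S) ** 2))

def quality_score(cr, prd_value):
    return cr / prd_value

s = np.array([1.0, 2.0, 3.0, 4.0])
S = 1.01 * s                 # reconstruction with 1% relative error
print(prd(s, S))             # approximately 1.0
```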

E. COMPRESSION SPEED AND COMPUTATIONAL EFFICIENCY
The processing time, compression speed, and computational efficiency of the proposed algorithm for different signals from the two datasets used in this paper are reported in Table 5. The processing time and compression speed associated with the proposed algorithm are as important as its compression performance.
The algorithm's compression speed is vital for its suitability for wireless biosensors in Remote Monitoring Systems: the less the processing time, the higher the compression speed and computational efficiency, as shown in Table 5. To verify the superiority of the proposed algorithm in processing time, compression speed, and computational efficiency, a comparison with Tchebichef+ABC [43] is made and illustrated in Table 6 and Figs. 11 and 12. They show the superiority of the proposed algorithm in processing time, and thus in compression speed and computational efficiency. That superiority is due to accelerating the Ant Lion Optimizer (AALO) with levy flight, which reduced the number of iterations performed by the AALO algorithm in searching for the optimum solutions.

F. ENERGY CONSUMPTION EVALUATION
In this part, we introduce a method to evaluate energy consumption. Before introducing the method, we state some notes.
-When processing or transmitting data, wearable devices are in a 'wake-up' state; when not, they are in a 'sleep' state.
-When a wearable device is in a 'wake-up' mode, it consumes its battery for the duration of that period.
-The proposed algorithm reduces battery consumption by decreasing the size of the transmitted data, which shortens the transmission time.
-On the other hand, the algorithm adds processing time, which increases energy consumption.
-So, there is a trade-off between transmission time and processing time.
-Let X be the time needed to transmit the original signal, and Y be the time required to compress and transmit the compressed signal.
-Since Bluetooth typically sends one packet per second [57], and the database used here is sampled at 360 Hz (360 samples per second), signal MIT-BIH Rec. 100, for example, spans ten packets (10 * 360 = 3600 samples), so it needs ten seconds (10,000 ms) to transmit.
-Using the proposed algorithm, if the signal is compressed with CR = 10, it becomes 360 samples (1 packet) and needs 1 second to transmit, but the compression time is 5400 ms.
The figures above show that the wearable device takes 10,000 ms to send the signal without compression. Using the proposed algorithm, the device instead takes 5400 ms to compress the signal and 1 second (1000 ms) to send it, i.e., 6400 ms in total (1000 ms + 5400 ms) to compress and send the signal. The overall wake-up time therefore decreases by about 3600 ms, thereby reducing the battery consumption of the wearable device.
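The arithmetic in the notes above can be reproduced with a short sketch. The packet rate and sampling rate are taken from the notes (one Bluetooth packet per second [57], 360 Hz sampling); `wakeup_time_ms` is an illustrative helper, not the authors' code:

```python
BT_PACKET_RATE = 1  # packets per second (typical Bluetooth, per [57])
FS = 360            # MIT-BIH sampling rate, samples per second

def wakeup_time_ms(n_samples, cr=1, compress_ms=0):
    """Total 'wake-up' time: compression time plus transmission time,
    assuming one 360-sample packet is transmitted per second."""
    packets = (n_samples // cr) / FS
    transmit_ms = packets / BT_PACKET_RATE * 1000
    return compress_ms + transmit_ms

uncompressed = wakeup_time_ms(3600)                          # 10,000 ms
compressed = wakeup_time_ms(3600, cr=10, compress_ms=5400)   # 6,400 ms
saving = uncompressed - compressed                           # 3,600 ms
```

The 3600 ms saving holds whenever the transmission time removed by compression exceeds the processing time the algorithm adds, which is the trade-off stated in the notes.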

VI. DISCUSSION
The comparative experiments and the above results show that the proposed compression algorithm is superior to other existing methods. The proposed algorithm is numerically evaluated on two widely used datasets, and the experiments included many performance measures. Although the proposed method did not achieve the highest CR, it obtained the best PRD, SNR, PSNR, and QS. This ensures no loss of diagnostic information when the signal is reconstructed, which is critical when compressing medical signals, since any loss of diagnostic data results in an incorrect diagnosis. In addition, the proposed algorithm's superiority in processing time, and thus in compression speed and computational efficiency, makes it more effective in reducing energy consumption. The superiority of the proposed algorithm in compressing ECG signals is due to the following worthwhile reasons:
1. Krawtchouk moments are orthogonal moments with orthogonal basis functions. Thus, each Krawtchouk moment coefficient can capture distinct and unique parts of the signal with no information redundancy.
2. Krawtchouk moments' basis functions can extract various distinct types of information from the signal depending on the order value.
3. Moments based on discrete orthogonal polynomials perform well in signal compression because they exhibit better energy compaction for common signals. If a discrete orthogonal moment is properly chosen, the signal's energy is concentrated in a relatively small percentage of the moment coefficients; these coefficients are stored and later used to generate the reconstructed signal.
4. The ability of Krawtchouk moments to extract both local and global features.
5. The superior compression speed and computational efficiency result from the accelerated Ant Lion Optimizer (AALO) using Lévy flight; this acceleration lowered the number of iterations conducted by the AALO algorithm when searching for the optimal solution.
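The energy-compaction argument in reason 3 can be illustrated with any discrete orthogonal basis. The sketch below uses an orthonormal DCT-II matrix as a simple stand-in for the Krawtchouk basis (illustration only: the proposed algorithm uses Krawtchouk moments with AALO-selected coefficients, not a DCT with magnitude thresholding):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, used here as a stand-in for the Krawtchouk basis."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def compress_topk(x, keep):
    """Keep only the `keep` largest-magnitude moment coefficients."""
    B = dct_matrix(len(x))
    c = B @ x                            # forward transform (moments)
    drop = np.argsort(np.abs(c))[:-keep] # indices of the smallest coefficients
    c[drop] = 0.0
    return B.T @ c                       # reconstruct from the kept moments

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 6 * np.pi, 360)) + 0.02 * rng.standard_normal(360)
x_rec = compress_topk(x, keep=36)        # store 36 of 360 coefficients (CR = 10)
prd_percent = 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```

Because the basis is orthonormal, each discarded coefficient removes exactly its own energy from the reconstruction, so a well-matched basis concentrates the signal in few coefficients and keeps the PRD small even at a high CR.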
While several compression techniques have been developed, considerable limitations and challenges remain. In designing compression methods, computational complexity and accessible memory management play a critical role. The majority of existing techniques are computationally complex, especially in real-time applications such as Remote Monitoring Systems. Memory management also becomes more difficult with the use of compression algorithms: when the memory required to run the compression technique exceeds the available device memory, efficient compression cannot be accomplished. Even though some compression techniques achieve a higher CR, they do not efficiently manage the available memory. As a result, memory management in compression algorithms is another interesting study area.

VII. CONCLUSION
The present article proposes an efficient ECG signal compression algorithm using discrete Krawtchouk moments and AALO. The numerous experiments on the proposed algorithm are applied to two commonly used datasets: the MIT-BIH arrhythmia dataset and the ECG-ID dataset. The proposed algorithm's performance was evaluated in three directions. The first direction is compression ratio and reconstruction quality. The second is compression speed and computational efficiency. The last direction is energy consumption. The comparative experiments indicate the proposed algorithm's advantage over other compression algorithms in all directions. This performance improvement is due to the use of orthogonal Krawtchouk moments to extract the features from ECG signals. In addition, the AALO algorithm successfully selects the best Krawtchouk moments based on the MSE as the objective function. The selection of the optimum Krawtchouk moments using AALO yields improved, high-quality reconstructed ECG signals in a reasonable time. This is because AALO finds the optimal features in fewer iterations, decreasing the processing time and, consequently, the energy consumption. From the research that has been performed, it is possible to conclude that the proposed algorithm achieves a high compression ratio while preserving the reconstructed signal's quality and reducing energy consumption. Thus, it is suitable for wearable sensors, the processing of long-term recordings and huge databases, and Remote Health Monitoring Systems. In our future research, the compression ability of the proposed algorithm could be increased by using new discrete orthogonal moments as a feature extractor to obtain features from the ECG signal. Furthermore, using different feature selection algorithms to select the optimum features that achieve the best-reconstructed signal can be a highly effective strategy for optimizing the algorithm's compression ability.

ISLAM S. FATHI received the B.Sc. and M.Sc. degrees in math and computer sciences from the Faculty of Science, Zagazig University, Egypt, in 2013 and 2019, respectively. He is currently working as a Teaching Assistant with the Department of Information Systems, Al-Alson Academy, Cairo, Egypt. His research interests include signal processing, metaheuristic optimization, and bioinformatics.
MOHAMED ABD ALLAH MAKHLOUF received the bachelor's degree in computer science and operation research from the Faculty of Science, the master's degree in expert systems from the Faculty of Science, Cairo University, the Ph.D. degree in computer science from the Faculty of Science, Zagazig University, and the Ph.D. degree in computer science from Granada University, Spain, in 2016. He is currently an Associate Professor with the Faculty of Computer Science and Informatics, Suez Canal University, and the Faculty of Computer Science, Nahda University. His research interests include machine learning, data mining, intelligent bioinformatics, metaheuristic optimization, decision support systems, and predictive models.
ELSAEED OSMAN is currently a Professor with the Department of Electrical Engineering, Faculty of Engineering, Al-Azhar University, where he has served as the Dean of the Faculty of Engineering.
MOHAMED ALI AHMED (Member, IEEE) is currently an Assistant Professor with the Mathematics Department, Faculty of Science, Suez Canal University. His research interests include computer networks, performance evaluation, queuing systems, metaheuristic optimization, and bioinformatics.