A New Estimation Algorithm for Frequency and Amplitude in Harmonic Signal Processing

Low-rank matrix recovery is a large-scale data analysis and processing technique whose theory has been widely applied in image restoration, image denoising, video background modeling, signal recovery, and other fields. This paper proposes an improved inexact augmented Lagrange multiplier (IALM) method to solve the harmonic recovery (HR) problem. The performance of the original IALM and the improved IALM is then compared for the case in which the sparse matrix is a diagonal sparse matrix satisfying certain conditions, and the results show that the improved IALM algorithm is more stable than the original one. The improvement strategy is then extended to two settings in which the positions of the non-zero elements of the sparse matrix are either fixed or random, which provides a way to adapt the algorithm to different application scenarios. Finally, the original and improved IALM algorithms are both applied to the HR problem, and the experimental results show that the improved IALM algorithm achieves better solution performance.


I. INTRODUCTION
With the rapid development of information technology, the effective use of large-scale or high-dimensional data has become increasingly important for everyday life and scientific research. Large-scale data analysis and processing techniques have been widely used in signal processing, computer vision, image processing, video analysis, face recognition, and other fields. For example, recommending movies that users may be interested in according to the types of movies they like to watch [1], compressing and storing large-scale data, and restoring the compressed data are all inseparable from large-scale data analysis and processing techniques. Research on such techniques is therefore of great significance for processing information effectively.
Low-rank matrix recovery is a large-scale data analysis and processing technique that has been widely used in image denoising [2], visual analysis [3], and other fields. It mainly describes a class of problems in which an original low-rank matrix is recovered from an observation matrix (which may be polluted by noise). Matrix completion (MC) [4]-[6] and matrix low-rank decomposition (MLRD) are two typical problems of this class, and MLRD is also known in the literature as robust principal component analysis (RPCA) [7]-[9].
MC can be modeled as the following mathematical problem:

$$\min_{A}\ \|A\|_* \quad \text{s.t.} \quad P_{\Omega}(A) = P_{\Omega}(D), \tag{1}$$

where $\Omega \subseteq [n_1] \times [n_2]$ ($[n_1] = \{1, \cdots, n_1\}$, $[n_2] = \{1, \cdots, n_2\}$) is the index set of the sampled elements, $D$ is the observation matrix, $A$ is the low-rank matrix to be recovered, $\|A\|_*$ denotes the nuclear norm of $A$, and $P_{\Omega}(\cdot)$ is the projection operator, defined as

$$\left[P_{\Omega}(X)\right]_{ij} = \begin{cases} X_{ij}, & (i,j) \in \Omega, \\ 0, & (i,j) \notin \Omega. \end{cases} \tag{2}$$

So far, MC has been widely used in image restoration, recommendation systems, machine learning, and other fields. However, the sampled data in MC are generally assumed to be accurate (i.e., noise-free). In practice, the observed data are often polluted by noise; recovering the original low-rank matrix from noisy observations is the MLRD problem. This problem was proposed by Candès, Ma, et al. [10]-[15] in 2009 and has been widely used in image processing [16], video background modeling, signal recovery, and other fields. MLRD can be modeled as the following mathematical problem:

$$\min_{A,E}\ \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad D = A + E, \tag{3}$$

where $D$ is the observation matrix, $A$ is the low-rank matrix to be recovered, $E$ is the sparse noise matrix, $\|A\|_*$ denotes the nuclear norm of $A$, $\lambda$ is a weighting parameter, and $\|E\|_1$ denotes the $L_1$ norm of $E$ (i.e., the sum of the absolute values of all elements of $E$).
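For concreteness, the following minimal numpy sketch (an illustration added here, not code from the paper) shows how the quantities appearing in models (1)-(3) can be evaluated: the nuclear norm via singular values, the $L_1$ norm, and the projection operator $P_{\Omega}$ as an elementwise mask; the matrix size and the sampling rate are arbitrary choices of the sketch.

```python
import numpy as np

def nuclear_norm(X):
    # ||X||_*: the sum of the singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

def l1_norm(X):
    # ||X||_1: the sum of the absolute values of all elements of X.
    return np.abs(X).sum()

def project(X, mask):
    # P_Omega(X): keep entries with (i, j) in Omega, set the rest to 0.
    return np.where(mask, X, 0.0)

# Example: a rank-1 matrix observed on a random index set Omega.
rng = np.random.default_rng(0)
D = np.outer(rng.standard_normal(5), rng.standard_normal(5))
omega = rng.random((5, 5)) < 0.6  # boolean mask encoding Omega
print(nuclear_norm(D), l1_norm(D), project(D, omega))
```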
After MLRD was proposed, Lin et al. [14] proposed an equivalent model of the MC model (1); that is, MC was regarded as a special case of MLRD, modeled as follows:

$$\min_{A,E}\ \|A\|_* \quad \text{s.t.} \quad A + E = D,\ \ P_{\Omega}(E) = 0, \tag{4}$$

where the unobserved elements of $D$ are set to 0 and are absorbed by $E$. MLRD has received wide attention since it was proposed, and various algorithms have been developed for it. They can be divided into convex optimization algorithms and non-convex optimization algorithms. Convex optimization algorithms include the interior point algorithm [10], [17], the iterative thresholding (IT) algorithm [6], [13], [18]-[21], the accelerated proximal gradient (APG) algorithm [22], the exact augmented Lagrange multiplier (EALM) method, the inexact augmented Lagrange multiplier (IALM) method [14], the alternating direction method of multipliers (ADMM), and the alternating linearization method (ALM) [23]. Non-convex optimization algorithms include the principal component analysis method [24]-[27], the matrix decomposition method [28], and methods that replace the nuclear norm with a non-convex function [29]-[36]. This paper focuses on convex optimization algorithms. Among them, the IALM method has the best overall performance, but in some specific scenarios its effect cannot meet practical needs. For example, for data with column noise, Tang et al. [37] proposed a robust principal component analysis model based on low-rank and block-sparse matrix decomposition; in the field of computer vision, Cao et al. [38] proposed a total-variation-regularized MLRD model. In the harmonic recovery (HR) problem, the IALM algorithm ignores certain structural characteristics of the sparse matrix. Therefore, it is necessary to improve the MLRD model or the IALM algorithm in this scenario.
The main contributions of this paper are as follows:
1) For the case in which the sparse matrix is a diagonal sparse matrix, the improvement strategy for the convex optimization model (3) is analyzed in depth, and it is shown that, under a certain improvement strategy, model (3) can be transformed into the simpler model (4), which explains the relationship between MLRD and MC from a new angle. A model for solving the HR problem and an improved IALM algorithm are then proposed.
2) The performance of the original and improved IALM algorithms is compared when the sparse matrix is a diagonal sparse matrix satisfying certain conditions, and the sensitivity and convergence of the improved IALM algorithm are discussed. The improvement strategy is extended to two settings in which the positions of the non-zero elements of the sparse matrix are fixed or random.
3) It is proved that the HR problem can be transformed into an MLRD problem. The original and improved IALM algorithms are used to solve the HR problem, their performance is compared through experiments, and the results show that the improved IALM algorithm has better solution performance.

II. RELATED WORKS
A. THE CLASSICAL ALGORITHMS OF MLRD
In order to better compare the advantages and disadvantages of different algorithms, this section first analyzes the advantages and disadvantages of the classical MLRD algorithms, proposes an improvement strategy that transforms model (3) into model (4) when the sparse matrix is a diagonal sparse matrix, and then proposes a model for solving the HR problem together with an improved IALM algorithm.
In view of the shortcomings of the interior point algorithm, such as its small application scale and slow convergence, Wright et al. [13] proposed the IT algorithm. The IT algorithm is simple and easy to implement, but its parameters are difficult to select, its convergence is slow, and its iterative sequence converges only to an approximate solution of the convex optimization model (3). To address these shortcomings, Lin et al. [22] proposed the APG algorithm. Compared with the IT algorithm, the convergence speed of the APG algorithm is significantly improved, but it still cannot meet the needs of practical problems, and the iterative sequence still converges only to an approximate solution of model (3). Later, Lin et al. [14] proposed the augmented Lagrange multiplier methods, including EALM and IALM. Compared with the earlier algorithms, these two methods greatly improve both convergence speed and solution accuracy. The ADMM and ALM were proposed subsequently, and it can be shown that they are essentially equivalent to IALM. The IALM method is the classical algorithm with the best overall performance for solving the convex optimization model (3). However, in some specific application scenarios the sparse matrix often has certain structural characteristics, such as being a diagonal sparse matrix, a column sparse matrix, or a block sparse matrix [39]. In these cases, the solution obtained by applying the IALM algorithm directly often deviates from the true solution. The main reason is that when the IALM algorithm is applied to the convex optimization model (3), every element of the sparse matrix is treated equally and the structural characteristics of the sparse matrix are ignored. Therefore, improving the original IALM algorithm or model (3) for specific application scenarios will undoubtedly improve the overall performance of the algorithm and its practical effect.
To improve the IALM algorithm, we need to understand its basic principle. When solving model (3), the idea of the IALM algorithm is to alternately update the variables in the augmented Lagrange function of model (3): given the observation matrix D and the weight λ, the low-rank matrix A is updated by singular value thresholding, the sparse matrix E is updated by soft thresholding, and the Lagrange multiplier Y and penalty parameter μ are then updated; this is repeated until convergence (a sketch of this iteration is given below; see [14] for the full listing). In this paper, for the situation in which the sparse matrix is a k-diagonal sparse matrix satisfying certain conditions, an improvement strategy is proposed to simplify model (3) to model (4), and model (4) is then solved by the IALM algorithm. The k-diagonal matrix is defined as follows: suppose the element in the ith row and jth column of the matrix E is E_ij; if i − j = p, then E_ij is located on the pth diagonal of the matrix. A k-diagonal matrix (k odd) is a matrix whose non-zero elements lie on the 0th, ±1st, ±2nd, ..., ±(k−1)/2th diagonals, with zero elements at all other positions.
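The following numpy sketch implements this IALM iteration for model (3) following [14]; the default parameter choices (the initial μ, the growth factor ρ, and the stopping tolerance) are common defaults assumed by this sketch rather than values prescribed by this paper.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Elementwise soft thresholding: proximal operator of the L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def ialm_mlrd(D, lam=None, rho=1.5, tol=1e-7, max_iter=1000):
    """Solve min ||A||_* + lam * ||E||_1  s.t.  D = A + E (model (3)) by IALM."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    sigma1 = np.linalg.norm(D, 2)               # largest singular value of D
    Y = D / max(sigma1, np.abs(D).max() / lam)  # multiplier initialization from [14]
    E = np.zeros_like(D)
    mu = 1.25 / sigma1
    for _ in range(max_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)       # update the low-rank part
        E = soft(D - A + Y / mu, lam / mu)      # update the sparse part
        Y = Y + mu * (D - A - E)                # update the Lagrange multiplier
        mu *= rho                               # increase the penalty parameter
        if np.linalg.norm(D - A - E, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return A, E
```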
Now we improve model (3) for the case in which the sparse matrix is a k-diagonal sparse matrix. Since the convex optimization model (3) treats every element of the sparse matrix equally and ignores its structural characteristics, we can add those structural characteristics to model (3). For the k-diagonal sparse matrix, we can use the projection operator $P_{\Omega}(\cdot)$ of model (1), where $\Omega$ is now the index set of the non-k-diagonal elements, i.e., $\Omega = \{(i,j) : |i - j| > (k-1)/2\}$. Adding the corresponding constraint to model (3) yields the following model:

$$\min_{A,E}\ \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad D = A + E,\ \ P_{\Omega}(E) = 0. \tag{5}$$

The constraint $P_{\Omega}(E) = 0$ forces E to be a k-diagonal matrix, and when k is small, a k-diagonal matrix is necessarily sparse. In practical applications k is often a small known number, so the constraint $P_{\Omega}(E) = 0$ already makes E a diagonal sparse matrix, and it is unnecessary to also minimize the $L_1$ norm of E in the objective function (minimizing the $L_1$ norm of E merely restricts E to be sparse). In summary, the following convex optimization model is obtained:

$$\min_{A,E}\ \|A\|_* \quad \text{s.t.} \quad D = A + E,\ \ P_{\Omega}(E) = 0. \tag{6}$$

Note that model (6) is formally identical to model (4); the only difference is that all elements of D in model (6) are observed data, whereas in the MC model (4) only some elements of D are observed and the unobserved data are usually set to 0. This only means that the initial values of the unobserved data differ in the algorithm implementation, so model (6) is in fact model (4). Therefore, MLRD in this scenario can be solved by MC theory. Moreover, since applying MC theory here amounts to adding the structural characteristics of the sparse matrix, it can be expected that the IALM algorithm for MC will achieve better results in this scenario.
When the k-diagonal sparse matrix satisfies additional constraints, it suffices to add them to model (6). For example, in the sparse matrix of the HR problem, besides the k-diagonal structure, the matrix elements on the 0th, ±1st, ±2nd, ..., ±(k−1)/2th diagonals are equal to certain numbers $x_1, x_2, x_3, \cdots$, respectively. Adding this structural characteristic to model (6) gives model (7):

$$\min_{A,E}\ \|A\|_* \quad \text{s.t.} \quad D = A + E,\ \ P_{\Omega}(E) = 0,\ \ E_{ij} = x_{|i-j|+1}\ \ \forall\, (i,j) \notin \Omega. \tag{7}$$

In this scenario, the description of the sparse matrix E in model (7) is more accurate, so using model (7) for the HR problem will undoubtedly achieve better results.
The IALM algorithm for MC was proposed in reference [14]. When the sparse matrix is a k-diagonal sparse matrix, the following point needs attention when applying the IALM algorithm of MC to MLRD: the MC problem only requires recovery of the low-rank matrix A and does not require recovery of the sparse matrix E. Therefore, to make the results of the MC algorithm consistent with the objectives of the MLRD problem, we do not need to set part of the elements of D to 0 as in reference [14] (in the k-diagonal scenario, this would mean setting the k-diagonal elements of D to 0). In fact, if the observation matrix D is used directly as the input of the MC algorithm, the meaning of model (6) coincides with that of the MLRD model (3). From this perspective, MLRD can be regarded as a special case of MC. For convenience, the IALM algorithms for solving models (3), (6), and (7) are denoted IALM-MLRD, IALM-MC, and IALM-HR, respectively. For the specific steps of IALM-MC, please refer to reference [14]. IALM-HR follows the same iteration, except that the E-update enforces the structure of model (7): the entries of E on each of the k diagonals are set to the mean of the corresponding diagonal of the residual D − A + Y/μ, and all other entries are set to 0 (a sketch of this update is given below), where diag(X, k) denotes the vector of elements on the kth diagonal of the matrix X and mean(v) denotes the mean value of all elements of the vector v.
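A minimal numpy sketch of this structured E-update, consistent with the diag/mean description above (the function name and the assumption that the residual matrix is square are choices of this sketch):

```python
import numpy as np

def e_update_hr(G, k):
    """Project the residual G = D - A + Y/mu onto the structure of model (7):
    E is k-diagonal, and all entries on the +p-th and -p-th diagonals share a
    single value, taken as the mean of the corresponding entries of G."""
    E = np.zeros_like(G)
    for p in range((k + 1) // 2):  # p = 0, 1, ..., (k - 1) / 2
        # Gather the entries of G on the +p-th and -p-th diagonals.
        vals = np.diag(G) if p == 0 else np.concatenate([np.diag(G, p), np.diag(G, -p)])
        x = vals.mean()             # shared diagonal value x_{p+1}
        idx = np.arange(G.shape[0] - p)
        E[idx, idx + p] = x         # fill the +p-th diagonal
        E[idx + p, idx] = x         # fill the -p-th diagonal
    return E
```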

III. CONVERGENCE AND SENSITIVITY OF IALM-HR
A. CONVERGENCE ANALYSIS
Lin et al. [9] have proved the convergence and convergence rate of IALM-MC. The main result is as follows. Theorem 1: For IALM-MC, any accumulation point $A^*$ of $A_k$ is an optimal solution of (6), and the convergence rate is at least $O(\mu_{k-1}^{-1})$, in the sense that $\left| \|A_k\|_* - f^* \right| = O(\mu_{k-1}^{-1})$, where $f^*$ is the optimal value of model (6). It can be proved that this theorem also applies to IALM-HR, with the same proof as for IALM-MC.

B. SENSITIVITY ANALYSIS
1) SENSITIVITY TO INITIAL VALUES
Because a convex optimization model has only global optimal solutions and no non-global local optima, for any initial value, provided the algorithm converges, IALM-HR always converges to the global optimal solution of the model; that is, the algorithm is not sensitive to the initial value.

2) SENSITIVITY TO RANK OF MATRIX A AND SPARSITY OF MATRIX E
Because the model assumes that the rank of the matrix A is low and that the matrix E is sparse, the algorithm will not give good results when the rank of A or the density of E exceeds a certain limit. This paper compares the sensitivity of IALM-HR and IALM-MLRD to the rank of A and the sparsity of E through experiments. The experiments in this section and all subsequent experiments are run on a 2.5 GHz Intel(R) Core i5-7300HQ quad-core processor with 8 GB of memory, under Windows 10 and MATLAB R2018b.
The experimental data are set as follows. In this paper we only discuss square matrices; the results extend easily to general matrices. The low-rank matrix $A \in \mathbb{R}^{m \times m}$ is obtained as the product $LU$ of matrices $L \in \mathbb{R}^{m \times r}$ and $U \in \mathbb{R}^{r \times m}$, where r is the rank of A and m = 100; the elements of L and U are independent Gaussian variables with mean 0 and variance 1; the elements on the 0th, ±1st, ±2nd, ... diagonals of the diagonal sparse matrix E are equal to certain numbers $x_1, x_2, x_3, \cdots$, respectively, where $x_1, x_2, x_3, \cdots$ are independent and uniformly distributed on U(0, 100); and the observation matrix is D = A + E. The algorithm parameters are detailed in reference [14].
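The following numpy sketch generates synthetic data exactly as described above, together with the relative recovery error defined below (the random seed and the helper names are choices of this sketch):

```python
import numpy as np
rng = np.random.default_rng(42)

def make_data(m=100, r=5, k=3):
    # Low-rank part: A = L @ U with i.i.d. standard Gaussian entries.
    L = rng.standard_normal((m, r))
    U = rng.standard_normal((r, m))
    A = L @ U
    # Sparse part: k-diagonal E whose +/-p-th diagonals share x_{p+1} ~ U(0, 100).
    E = np.zeros((m, m))
    for p in range((k + 1) // 2):
        x = rng.uniform(0, 100)
        idx = np.arange(m - p)
        E[idx, idx + p] = x
        E[idx + p, idx] = x
    return A, E, A + E  # the observation matrix is D = A + E

def rel_error(A_hat, A_star):
    # e = ||A_hat - A_star||_F / ||A_star||_F
    return np.linalg.norm(A_hat - A_star) / np.linalg.norm(A_star)
```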
To compare the performance of the two algorithms, the number of iterations, the rank of the recovered matrix, and the error of the recovered matrix are compared. The error is calculated as

$$e = \frac{\|\hat{A} - A^*\|_F}{\|A^*\|_F},$$

where $A^*$ denotes the true low-rank matrix A, $\hat{A}$ denotes the estimate of A obtained by the algorithm, and $\|X\|_F$ denotes the Frobenius norm of the matrix X (i.e., the arithmetic square root of the sum of the squares of all its elements).
When studying sensitivity to the rank, a 3-diagonal sparse matrix E is randomly generated, matrices $A_n$ (n = 1, 2, ..., 80) with ranks 1 to 80 are randomly generated, and the matrices $D = A_n + E$ are decomposed; the results are shown in Figures 1-3. When studying sensitivity to the sparsity, a low-rank matrix A of rank 5 is randomly generated, k-diagonal matrices $E_n$ (n = 1, 2, ..., 40) with k = 1, 3, 5, ..., 79 are randomly generated, and the matrices $D = A + E_n$ are decomposed; the results are shown in Figures 4-6.
Figures 1-3 show that when the rank of the low-rank matrix A increases to about 30, the error of the matrix recovered by IALM-MLRD begins to increase rapidly and the rank of the recovered matrix begins to deviate from the true rank; these phenomena do not occur for IALM-HR, which means that IALM-MLRD is no longer applicable when the rank of A is larger than 30. As can be seen in Figure 1, when the rank of A increases to about 67, the number of iterations of IALM-HR begins to exceed that of IALM-MLRD; however, IALM-MLRD is already inapplicable at that point, so comparing iteration counts there is meaningless. In summary, the sensitivity of IALM-HR to the rank of the low-rank matrix A is lower than that of IALM-MLRD. The analysis of Figures 4-6 is analogous.
It can be concluded from Figures 1-6 that IALM-HR is less sensitive than IALM-MLRD to the rank of matrix A and the sparsity of matrix E, so IALM-HR is more stable.

3) SENSITIVITY TO NOISE
The MLRD model assumes that the observation matrix D is the sum of a low-rank data matrix A and a sparse noise matrix E. The model is also known as RPCA; its purpose is to improve the robustness of principal component analysis to various kinds of noise. The classical algorithms obtain good solutions for all kinds of sparse noise matrices E; that is, the algorithms are not sensitive to the type of sparse noise E.
However, in some applications the sparse matrix E is itself regarded as a useful part of the data, and each element of the observation matrix D carries an observation error; that is, D actually consists of a low-rank data matrix A, a sparse data matrix E, and an error matrix F. Some scholars study this kind of problem as a variant of the MLRD model. This line of research is outside the scope of this paper and is not discussed further.

4) GENERALIZATION OF IMPROVED STRATEGY
In fact, the improvement strategy that turns model (3) into model (6) provides a template for model improvement in various application scenarios: change the constraints or change the objective function. For any application scenario, as long as the prior information of the scene is fully incorporated into the model, the solution accuracy or convergence speed of the algorithm can be expected to improve.
IALM-MC is successfully applied to MLRD because the ''diagonal sparsity'' of the diagonal sparse matrix is added to the constraints and, after analyzing the role of this condition in the objective function, the $L_1$ norm term is omitted from the objective. The ''diagonal sparsity'' of the sparse matrix in fact specifies the positions of its non-zero elements. From a broader perspective, therefore, whenever the positions of the non-zero elements of the sparse matrix are known, such as diagonal positions, upper-triangular positions, or upper-left-corner positions, IALM-MC can be used to decompose the observation matrix. Since the MC model then contains the location information of the non-zero elements of the sparse noise matrix, it can be expected that the solution accuracy and convergence speed of MC theory will be better than those of the classical MLRD algorithms in scenarios where these positions are known.
In addition, for MLRD in the general case, since the noise positions are random and the noise matrix is sparse, if some elements are randomly sampled from the observation matrix and the sampling rate satisfies certain conditions, there is a high probability that the sampled data are unpolluted (because the noise matrix is sparse). Based on this, we can further study how large the sampling rate must be for IALM-MC to be significantly better than IALM-MLRD (the MC model contains more prior information than the MLRD model, so such a sampling rate must exist). If, at the same time, the sampling rate is such that the sampled data are unpolluted with high probability, we can use IALM-MC to perform MLRD; for instance, if the probability that the sampled data are unpolluted is 0.95, then the results of IALM-MC will be better than those of IALM-MLRD in about 95 out of 100 experiments. Therefore, generalizing the theory to the general case of random noise positions requires the following work (a rough probability calculation is sketched after this list): a: Find a proper sampling rate at which IALM-MC is significantly better than IALM-MLRD.
b: Analyze the probability that the sampled data are unpolluted, and look for the optimal sampling rate under a high such probability.
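As a back-of-the-envelope illustration of task b (an assumption-level calculation added here, not part of the paper's analysis): if a fraction $s$ of the entries of $D$ are polluted and $K$ entries are sampled independently and uniformly at random, then

$$\Pr(\text{all } K \text{ sampled entries are unpolluted}) \approx (1 - s)^K.$$

For example, for a $100 \times 100$ observation matrix with a 3-diagonal noise matrix ($s \approx 298/10^4 \approx 0.03$), sampling $K = 10$ entries yields a clean sample with probability about $0.97^{10} \approx 0.74$, while $K = 100$ drops this to about $0.05$, which illustrates how quickly the clean-sample probability decays as the sampling rate grows.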

IV. THE APPLICATION OF THE MLRD ALGORITHM IN HR
A. DESCRIPTION AND MODEL OF HR
The key task of HR is to estimate the frequency and amplitude of a harmonic signal against different noise backgrounds. According to the noise background, the problem can be divided into the additive-noise case and the multiplicative-noise case [40]. This paper studies the recovery of a class of harmonic signals polluted by additive noise.
A real harmonic signal observed in additive noise is usually modeled as

$$x(t) = \sum_{k=1}^{p} A_k \cos(\omega_k t + \varphi_k) + n(t), \tag{9}$$

where p is the number of harmonic components; $A_k$, $\omega_k$, and $\varphi_k$ are the amplitude, frequency, and phase of the kth harmonic component, respectively; and n(t) is additive noise. Model (9) is commonly called the (one-dimensional) real harmonic signal model in additive noise, also known as the constant-amplitude real harmonic model. The recovery problem for this kind of harmonic signal is: how to obtain the frequencies $\omega_1, \omega_2, \cdots, \omega_p$ and amplitudes $A_1, A_2, \cdots, A_p$ of the harmonic components from some information about the observed signal x(t). To solve this problem, we make the following assumptions.
Assumption 1: $\omega_k \in (0, \pi)$, $k = 1, 2, \cdots, p$, and the $\omega_k$ are distinct.
Assumption 2: $\varphi_1, \varphi_2, \cdots, \varphi_p$ are i.i.d. $\sim U(-\pi, \pi]$.
Assumption 3: n(t) is a stationary zero-mean noise independent of $\varphi_1, \varphi_2, \cdots, \varphi_p$.
Under these assumptions, the following properties of the harmonic signal hold.
Property 1: The correlation functions of the observed signal and the noise satisfy

$$r_x(\tau) = r_s(\tau) + r_n(\tau). \tag{10}$$

Property 2: Let $s(t) = \sum_{k=1}^{p} A_k \cos(\omega_k t + \varphi_k)$; then the correlation function $r_s(\tau)$ of the real signal s(t) satisfies

$$r_s(\tau) = \sum_{k=1}^{p} \frac{A_k^2}{2} \cos(\omega_k \tau). \tag{11}$$

Properties 1-2 show that the correlation function of the observed signal x(t) in model (9) equals the sum of the correlation functions of the real harmonic signal s(t) and the additive noise n(t).
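For completeness, a one-line derivation of Property 2 (a standard calculation under Assumption 2, supplied here rather than reproduced from the paper): for a single component with $\varphi \sim U(-\pi, \pi]$,

$$\mathbb{E}\big[\cos(\omega t + \varphi)\cos(\omega(t+\tau) + \varphi)\big] = \tfrac{1}{2}\,\mathbb{E}\big[\cos(\omega\tau) + \cos(\omega(2t+\tau) + 2\varphi)\big] = \tfrac{1}{2}\cos(\omega\tau),$$

since the second cosine averages to zero over the uniform phase; cross terms between components with different frequencies vanish for the same reason, which yields equation (11).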
If the m-order correlation matrices of x(t), s(t), and n(t) are denoted $R_x$, $R_s$, and $R_n$, respectively, then by Properties 1-2

$$R_x = R_s + R_n, \tag{12}$$

where $(R_x)_{ij} = r_x(i-j)$, $(R_s)_{ij} = r_s(i-j)$, and $(R_n)_{ij} = r_n(i-j)$ for $i, j = 1, \cdots, m$. It can be proved that when $m \geq 2p$ the rank of $R_s$ is 2p, and $R_s$ is obviously non-sparse. Therefore, when m is large enough, $R_s$ is a non-sparse low-rank matrix.
It can be proved that $r_n(\tau)$ is approximately zero when $\tau$ is large. Therefore, when m is large enough, $R_n$ is a diagonal sparse matrix.
According to the above analysis, the correlation matrix $R_x$ of the observed signal x(t) equals the sum of a low-rank matrix $R_s$ and a sparse matrix $R_n$. Therefore, we can use MLRD to decompose $R_x$ to obtain the estimate $\hat{R}_s$ of the correlation matrix of the harmonic signal s(t), from which the estimates $\hat{r}_s(0), \hat{r}_s(1), \cdots, \hat{r}_s(m-1)$ of the correlation function follow, and then use equation (11) to obtain $\omega_1, \omega_2, \cdots, \omega_p$ and $A_1, A_2, \cdots, A_p$.
To solve equation (11) for the unknown parameters, we use the least-squares method to construct the optimization problem

$$\min_{\omega_1,\cdots,\omega_p,\, A_1,\cdots,A_p}\ \sum_{\tau=0}^{m-1} \left( \hat{r}_s(\tau) - \sum_{k=1}^{p} \frac{A_k^2}{2} \cos(\omega_k \tau) \right)^2, \tag{13}$$

and then use a global search algorithm to obtain $\omega_1, \omega_2, \cdots, \omega_p$ and $A_1, A_2, \cdots, A_p$.
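A minimal sketch of such a global search for p = 2 (the grid resolution and the device of treating $c_k = A_k^2/2$ as linear coefficients for fixed frequencies are choices of this sketch, not prescriptions of the paper):

```python
import numpy as np
from itertools import combinations

def estimate_params(r_hat, grid=None):
    """Fit r_hat(tau) ~ sum_k (A_k^2 / 2) cos(w_k tau) with p = 2 components.
    For each candidate frequency pair the coefficients c_k = A_k^2 / 2 enter
    linearly, so they are recovered by linear least squares."""
    m = len(r_hat)
    taus = np.arange(m)
    grid = grid if grid is not None else np.linspace(0.01, np.pi - 0.01, 300)
    best_err, best_params = np.inf, None
    for w1, w2 in combinations(grid, 2):
        M = np.column_stack([np.cos(w1 * taus), np.cos(w2 * taus)])
        c, *_ = np.linalg.lstsq(M, r_hat, rcond=None)
        if np.all(c > 0):  # valid amplitudes require c_k = A_k^2 / 2 > 0
            err = np.sum((M @ c - r_hat) ** 2)
            if err < best_err:
                best_err = err
                best_params = (w1, w2, *np.sqrt(2 * c))
    return best_params  # (w1, w2, A1, A2)
```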
B. THE MLRD FOR HR

1) PERFORMANCE COMPARISON OF THE TWO ALGORITHMS
In this section, we use IALM-MLRD and IALM-HR to solve the HR problem, and compare their performance. The experimental data and steps are as follows:

a: GENERATION OF FREQUENCY AND AMPLITUDE PARAMETERS OF REAL HARMONIC SIGNAL
In order to facilitate the study, the number of harmonic components is taken as p = 2, and the amplitudes and frequencies of the real harmonic signal are generated randomly within the parameter ranges. The specific data generated in the experiment are $\omega_1 = 1.5$, $\omega_2 = 2.5$, $A_1 = 4$, $A_2 = 5$.

b: GENERATION OF CORRELATION MATRIX OF REAL SIGNAL
Take m = 30, use equation (11) to compute the correlation function of the real harmonic signal, and arrange the values into the correlation matrix $R_s$.

c: GENERATION OF CORRELATION MATRIX FOR ZERO MEAN STATIONARY NOISE
A random telegraph signal is taken as the noise: at any time t, the telegraph signal takes the value 5 or −5, each with probability 0.5. Let N(t) denote the number of sign changes in [0, t); {N(t), t > 0} is a Poisson process with λ = 3. The correlation function of this noise is $r_n(\tau) = 25e^{-2\lambda|\tau|}$. With m = 30, the values of the correlation function are computed and arranged into the correlation matrix $R_n$.
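The matrices of steps b-c can be assembled with the values given above as in the following numpy sketch (the use of scipy.linalg.toeplitz is a convenience choice of this sketch):

```python
import numpy as np
from scipy.linalg import toeplitz

m, lam = 30, 3.0
w = np.array([1.5, 2.5])    # frequencies omega_1, omega_2
amp = np.array([4.0, 5.0])  # amplitudes A_1, A_2
taus = np.arange(m)

# Step b: r_s(tau) = sum_k (A_k^2 / 2) cos(w_k tau), arranged as a Toeplitz matrix.
r_s = (amp**2 / 2) @ np.cos(np.outer(w, taus))
R_s = toeplitz(r_s)

# Step c: random telegraph noise with r_n(tau) = 25 * exp(-2 * lambda * |tau|).
r_n = 25.0 * np.exp(-2 * lam * taus)
R_n = toeplitz(r_n)

R_x = R_s + R_n             # the observation matrix decomposed in step d
```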

d: THE LOW-RANK DECOMPOSITION OF THE CORRELATION MATRIX OF THE OBSERVED SIGNAL
IALM-MLRD and IALM-HR are used to decompose the observation matrix $R_x$, which equals the sum of $R_s$ and $R_n$. This yields the estimate $\hat{R}_s$ of the correlation matrix of the real signal and the estimates $\hat{r}_s(0), \hat{r}_s(1), \cdots, \hat{r}_s(m-1)$ of the correlation function.

e: USING THE ESTIMATION OF REAL SIGNAL CORRELATION FUNCTION TO SOLVE AMPLITUDE AND FREQUENCY
The estimates $\hat{r}_s(0), \hat{r}_s(1), \cdots, \hat{r}_s(m-1)$ of the real-signal correlation function are substituted into model (13), and the frequencies $\omega_1, \omega_2$ and amplitudes $A_1, A_2$ are obtained by the global search algorithm.
To compare the performance of the two algorithms, the number of iterations n, the error e of the recovered matrix, and the parameter error $e_2$ are compared. The error e is calculated as $e = \|\hat{A} - A^*\|_F / \|A^*\|_F$, where $A^*$ is the true low-rank matrix A and $\hat{A}$ is the estimate of A obtained by the algorithm. The error $e_2$ is calculated as $e_2 = \|\hat{p} - p^*\|_2 / \|p^*\|_2$, where $p^*$ is the true parameter vector (the true $\omega_1, \omega_2, A_1, A_2$ arranged in order into a vector), $\hat{p}$ is its estimate obtained by the algorithm, and $\|x\|_2$ is the 2-norm of the vector x (i.e., the arithmetic square root of the sum of the squares of its elements).
Under the stopping condition e = 1e−7, the experimental results are shown in Table 1; under the condition n = 20, they are shown in Table 2.

TABLE 1. Number of iterations under the stopping condition e = 1e−7.
Algorithm     n
IALM-MLRD    47
IALM-HR      18

TABLE 2. Recovery error after n = 20 iterations.
Algorithm     e
IALM-MLRD    3.63e−04
IALM-HR      1.29e−08

Table 1 shows that the numbers of iterations n of IALM-MLRD and IALM-HR are 47 and 18, respectively; Table 2 shows that the errors e of the matrices recovered by IALM-MLRD and IALM-HR are 3.63e−04 and 1.29e−08, respectively. As for the parameter error $e_2$, theoretical analysis shows that it depends on the error e of the recovered matrix and on the global search over the nonlinear equations, but in general, the smaller e is, the smaller $e_2$ is. Accordingly, in Table 1, because e is the same for both algorithms, there is little difference in $e_2$; in Table 2, the smaller e corresponds to the smaller $e_2$. In this experiment, whenever $e_2 \leq 1e-4$ the computed parameters agree with the true parameters to two decimal places, so the amplitude and frequency values are not listed in the tables.
Combining the two tables, it can be concluded that IALM-HR is better than IALM-MLRD in both the number of iterations n and the error e of the recovered matrix; that is, the solving efficiency of IALM-HR is higher.

2) DISCUSSION ON THE ESSENCE OF ALGORITHM SPEED UP
This section mainly compares the performance of IALM-MLRD and IALM-HR. The experimental results show that, under model (7) and at the same precision, IALM-HR solves the problem faster than IALM-MLRD. The fundamental reason is that the objective function of the model solved by IALM-HR is more accurate than that of IALM-MLRD, so the iterative scheme for E becomes more accurate and the algorithm converges faster.

V. CONCLUSION
MLRD has been widely used in image denoising, video background modeling, signal recovery, and other fields. However, in some specific applications, the classical MLRD algorithms ignore prior information available in the specific scene, which causes the solution accuracy or convergence speed to deviate substantially from the actual demand. To address this defect of the classical algorithms, this paper proposes an improvement strategy that successfully applies IALM-MC to the MLRD problem and generalizes the strategy to more general scenarios. In addition, this paper proposes IALM-HR for HR and uses both IALM-MLRD and IALM-HR to solve the HR problem. The experimental results show that IALM-HR generally performs better on the HR problem.
However, in some practical applications, the observation data are also affected by an observation error matrix F. In fact, when dealing with the HR problem, the observation matrix is usually not generated as in the experiment; rather, the true correlation matrix $R_x$ of the observed signal is estimated in some way, and the estimated matrix is taken as the observation matrix. In this case, the observation matrix is composed not only of a low-rank matrix and a sparse matrix but also contains estimation error. Although the estimation error is very small, it affects the experimental results. Therefore, if the robustness of the decomposition to the error matrix can be further improved, the application range of MLRD will be further expanded. In addition, non-convex optimization algorithms are expected to further improve the accuracy of the model. Future work will therefore apply the improvement strategy to non-convex optimization algorithms in order to broaden the application scenarios of the model and improve the performance of the algorithm.
QIXIANG LIAO received the Ph.D. degree from the National University of Defense Technology, Nanjing, China, in 2019. He is currently a Lecturer with the College of Meteorology and Oceanography, National University of Defense Technology, Changsha. His research interests include optimization methods on atmosphere and ocean satellite remote sensing.
DELI LIANG received the B.E., M.E., and D.E. degrees from Harbin Engineering University, Harbin, China. She is currently a Senior Engineer with the Beijing Institute of Astronautical System Engineering, Beijing, China. Her research interests include the atmosphere, space physics, and dynamic environment prediction.