A Modified Conjugate Descent Projection Method for Monotone Nonlinear Equations and Image Restoration

In this article, we propose a modified conjugate descent (CD) projection algorithm for solving systems of nonlinear monotone equations with convex constraints. The search direction in this algorithm uses a convex combination of the steepest descent direction and the well-known CD direction. The algorithm proves to be quite efficient for solving large-scale monotone nonlinear equations, as it has low storage requirements and does not need the computation of the Jacobian matrix. We prove the convergence of the algorithm under some conditions and perform numerical experiments on some test problems. In order to show the efficiency of our proposed algorithm, its numerical performance is compared with that of some existing algorithms. Finally, by reformulating the $\ell_{1}$ regularized problem as a monotone equation, we successfully apply the algorithm to restore some blurred images. The numerical results obtained show that the algorithm can be used as an efficient and qualitative solver for image restoration problems.


I. INTRODUCTION
This article aims at finding a solution to the nonlinear equation
$$H(x) = 0, \quad x \in \Omega, \qquad (1)$$
where $\Omega \subset \mathbb{R}^n$ is nonempty, closed and convex, and $H : \mathbb{R}^n \to \mathbb{R}^n$ is continuous and monotone. Monotone means that for all $x, y \in \mathbb{R}^n$,
$$\langle H(x) - H(y), x - y \rangle \ge 0.$$
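To make the definition concrete, the following snippet numerically checks the monotonicity inequality for a simple illustrative map (the map $H$ below is our own example, not one of the paper's test problems):

```python
import numpy as np

rng = np.random.default_rng(0)

def H(x):
    # An illustrative monotone map: H(x) = Ax + arctan(x), where
    # A is symmetric positive definite and arctan acts componentwise.
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    return A @ x + np.arctan(x)

# Check <H(x) - H(y), x - y> >= 0 on random pairs of points.
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.dot(H(x) - H(y), x - y) >= 0.0
```

The inequality holds here because the sum of two monotone maps (a positive semi-definite linear map and a componentwise increasing function) is again monotone.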
Solving nonlinear systems of monotone equations is crucial due to their occurrence in many applications. They appear as subproblems in proximal algorithms [1], and also in monotone variational inequality problems [2], [3]. In addition, monotone nonlinear equations are applicable in $\ell_1$-norm regularized optimization problems used to solve image restoration problems [4]-[6]. Interested readers can refer to [7]-[9] for further reading on image restoration. Recently, an application of nonlinear equations to financial forecasting was also considered in [10].
For solving systems of nonlinear equations, many methods have been developed. The prominent early method is the Newton method [11], followed by quasi-Newton methods [12]-[17]. Due to its good numerical performance and quadratic convergence, Newton's method is regarded as a very efficient method for solving nonlinear equations when the operator under consideration is continuously differentiable. However, in this article, the operator in equation (1) is only Lipschitz continuous and monotone, so some of the assumptions under which the Newton method can be applied are not satisfied. Moreover, the Newton method has other shortcomings, including the computation of the Jacobian matrix and the solution of a linear system at each iteration; when the dimension is very large, the method fails in this respect.
Linear conjugate gradient methods were first proposed by Hestenes and Stiefel to solve linear systems, and their joint work was the foundation of many subsequent studies of conjugate gradient methods. Conjugate gradient methods and their modified versions [18]-[24] were later extended to solve both unconstrained optimization problems and nonlinear monotone equations. The conjugate gradient methods proposed for solving unconstrained optimization problems can be categorized into two groups: the first comprises the Fletcher-Reeves (FR) [25], Dai-Yuan (DY) [26] and CD [27] methods, which share the same numerator in the conjugate parameter, while the second comprises the Hestenes-Stiefel (HS) [28], Polak-Ribière-Polyak (PRP) [29] and Liu-Storey (LS) [30] methods. The FR, DY, and CD algorithms are known for their good convergence behavior, as opposed to the methods in the second category, which are known for their good numerical performance [31].
On the other hand, Cheng [32] proposed a conjugate gradient algorithm for solving monotone equations. His method was based on the combination of the famous PRP conjugate gradient method [29], [33] and the hyperplane projection method. Xiao and Zhu [6] used the projection technique [34] and the CG parameter of Hager and Zhang [35] to develop a projected conjugate gradient method for solving nonlinear monotone equations. Liu et al. [36] combined two sufficient descent conjugate gradient methods with a hyperplane projection technique to solve nonlinear convex constrained monotone equations. Hu and Wei [37] extended the Wei-Yao-Liu conjugate gradient method for unconstrained optimization to solve nonlinear monotone equations. Liu and Li [38] proposed a CD-like projection method to solve convex constrained nonlinear monotone equations. Papp and Rapajić [39] combined the conjugate gradient method with the hyperplane projection technique and proposed new Fletcher-Reeves [25] type directions within an algorithmic framework for solving nonlinear monotone equations. Based on the work of [32] and [39], Abubakar and Kumam [40] extended the method in [41] and presented an efficient conjugate gradient method for solving nonlinear monotone equations. They achieved this by combining the conjugate gradient method in [41] with a hyperplane projection method. Moreover, Dai and Zhu [42] combined the hyperplane projection method in [34] with the modified HS method in [43] and presented an algorithm for solving monotone nonlinear equations. They proved the global convergence of their method under some assumptions and showed its efficiency numerically in comparison with other methods.
Recently, Yuan et al. [44] proposed a modified LS-like conjugate gradient method for solving nonlinear equations where the function under consideration is continuously differentiable. Their method uses a convex combination of the steepest descent direction and a modified version of the LS method [30]. This method satisfies both the sufficient descent and the trust-region properties. They proved the global convergence of the method and showed its effectiveness in solving nonlinear equations. However, the method imposes some restrictions in Steps 3 and 5 of the algorithm.
The motivation of this work is that instead of considering nonlinear equations where the operator under consideration is continuously differentiable, as in [44], we focus on solving nonlinear equations where the operator is Lipschitz continuous and monotone. Also, the restrictions in Steps 3 and 5 of the algorithm in [44] are not needed in our case. In addition, to our knowledge, there are few CD-like conjugate gradient algorithms for solving nonlinear monotone equations. Therefore, we aim to test how effective a CD-like conjugate gradient algorithm can be in solving monotone nonlinear equations. To achieve this, we consider a suitable line search and replace the LS parameter in [44] with the CD parameter. Moreover, we prove the convergence of the proposed algorithm, show its strength numerically, and demonstrate its efficiency in restoring blurred images.
The organization of the paper is as follows. We present the proposed algorithm together with its convergence analysis in the second section. In the third section, we conduct numerical experiments to demonstrate the strength of the proposed method for finding solutions to (1) and for image restoration problems. Finally, we conclude the paper in the fourth section.

II. ALGORITHM: MOTIVATION AND CONVERGENCE RESULT
In this section, we give the definition of the projection map, the motivation, and the description of the proposed algorithm.
Definition 1: Let $\Omega \subset \mathbb{R}^n$ be a nonempty, closed and convex set. The projection of any $x \in \mathbb{R}^n$ onto $\Omega$ is
$$P_{\Omega}(x) = \arg\min \{ \|x - y\| : y \in \Omega \}.$$
It is worth noting that $P_{\Omega}$ satisfies the nonexpansiveness inequality
$$\|P_{\Omega}(x) - P_{\Omega}(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n. \qquad (2)$$
Conjugate gradient methods for solving (1) generally use the iterative formula
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (3)$$
where $\alpha_k$ represents the step length along the direction $d_k$ and can be obtained from some line search; $x_k$ and $x_{k+1}$ represent the current and next iterates, respectively, and $d_k$ is a search direction defined as
$$d_k = \begin{cases} -H_k, & k = 0, \\ -H_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \qquad (4)$$
where $\beta_k$ is called the conjugate gradient parameter and $H_k = H(x_k)$. The search direction usually satisfies the sufficient descent inequality
$$H_k^{T} d_k \le -c \|H_k\|^2, \qquad (5)$$
where $c > 0$.
Different conjugate gradient methods for solving (1) are developed using different search directions. In particular, Yuan et al. [44] proposed a new search direction $d_k$, given in (6), by taking a convex combination of the steepest descent direction and a modified LS direction; it involves a weight $\chi \in (0, 1)$ and the modified difference vector $\hat{s}_k$ of Li and Fukushima [45], [46]. The term $L_k$ can be found in the work of Birgin and Martinez [47] and satisfies a property (7) which implies $L_k \in (0, 1)$. Inspired by the direction (6), we propose a new search direction corresponding to a new conjugate gradient algorithm for solving (1). Our approach is based on replacing the LS parameter in (6) with the CD parameter
$$\beta_k^{CD} = \frac{\|H_k\|^2}{-H_{k-1}^{T} d_{k-1}},$$
which yields our search direction (8), where $L_k$ is given as in (7). Now we proceed with the steps of our proposed algorithm for solving (1) as follows.
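As a concrete illustration of the ingredients just described, the following is a minimal Python sketch. It assumes $\Omega = \mathbb{R}^n_+$ (so the projection is a componentwise max), uses the classical CD parameter $\beta_k^{CD} = \|H_k\|^2 / (-H_{k-1}^{T} d_{k-1})$, and takes the combination weight $L_k$ as a given input; the exact form of direction (8) follows the paper, so this rendering is only an assumption:

```python
import numpy as np

def project_nonneg(x):
    # Projection onto Omega = R^n_+ (componentwise max with 0);
    # a general closed convex set needs its own projector.
    return np.maximum(x, 0.0)

def cd_parameter(H_k, H_prev, d_prev):
    # Classical conjugate descent (CD) parameter:
    #   beta_k = ||H_k||^2 / (-H_{k-1}^T d_{k-1})
    return np.dot(H_k, H_k) / (-np.dot(H_prev, d_prev))

def search_direction(H_k, H_prev, d_prev, L_k):
    # Convex combination of the steepest descent direction and a
    # CD-type direction, mirroring the structure described in the
    # text; the precise form of (8) and of L_k is an assumption here.
    if d_prev is None:
        return -H_k          # first iteration: steepest descent
    beta = cd_parameter(H_k, H_prev, d_prev)
    return -(L_k * H_k) + (1.0 - L_k) * (-H_k + beta * d_prev)
```

Note that when $d_{k-1}$ is a descent direction, $-H_{k-1}^{T} d_{k-1} > 0$, so the CD parameter stays positive.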
In order to show the global convergence of the MCDPM algorithm, we need the following assumptions and lemmas:
(G1) The mapping $H$ is monotone, that is, $\langle H(x) - H(y), x - y \rangle \ge 0$ for all $x, y \in \mathbb{R}^n$.
(G2) The mapping $H$ is Lipschitz continuous, that is, there exists $L > 0$ such that $\|H(x) - H(y)\| \le L \|x - y\|$ for all $x, y \in \mathbb{R}^n$.
(G3) The solution set of (1), denoted by $\hat{\Omega}$, is nonempty.
Lemma 1: Let $\{d_k\}$ be given by (8). Then there exists $c > 0$ such that (5) holds.
Step 1. If $\|H_k\| \le \mathrm{Tol}$, stop; otherwise proceed with Step 2.
Else compute $x_{k+1}$ via the projection step. Step 5. Let $k = k + 1$ and go to Step 1.
Lemma 2: Let $\{z_k\}$ be generated by Algorithm 1. Then the step length $\alpha_k$ is bounded away from zero. Proof: From (9), if $\alpha_k \ne 1$, then $\alpha_k' = \alpha_k \beta^{-1}$ does not satisfy (9). Using Lemma 1 and assumption (G2), the desired result is obtained after solving for $\alpha_k$.
Lemma 3: The sequences $\{x_k\}$ and $\{z_k\}$ are bounded, and
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0. \qquad (13)$$
Proof: To prove that $\{x_k\}$ and $\{z_k\}$ are bounded, we use the nonexpansiveness property (2): for any $\hat{x} \in \hat{\Omega}$, taking $x = x_{k+1}$ and $y = \hat{x}$ shows that the sequence $\{\|x_k - \hat{x}\|\}$ is decreasing, and hence $\{x_k\}$ is bounded. Combining this with assumption (G2), we can find $c_1 \ge 0$ such that $\|H(x_k)\| \le c_1$ for all $k \ge 0$; using the Cauchy-Schwarz inequality and (9), it follows that $\{z_k\}$ is bounded. Again, using assumption (G2), we can find another constant $c_2 \ge 0$ such that $\|H(z_k)\| \le c_2$ for all $k \ge 0$. This, together with (14), gives (15); summing (15) over $k = 0, 1, 2, \ldots$ yields (16), from which (13) follows. Applying the triangle inequality and the Cauchy-Schwarz inequality to (17), we arrive at (18), which shows that the direction is bounded. The next theorem is proved by contradiction, and the boundedness of the direction is required to establish the contradiction in (22).
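The overall iteration (backtracking line search, trial point $z_k$, hyperplane projection step) can be sketched as follows. This is not the paper's Algorithm 1: the steepest descent direction stands in for direction (8), and the line-search test is a common Solodov-Svaiter-type choice assumed here; the parameter defaults mirror the experiments section ($\sigma = 0.0001$, $\beta = 0.6$, $\gamma = 1.8$):

```python
import numpy as np

def projection_method(H, x0, project, max_iter=1000, tol=1e-5,
                      sigma=1e-4, beta=0.6, gamma=1.8):
    # Hedged sketch of a derivative-free hyperplane projection scheme
    # of the Solodov-Svaiter type; the exact direction and line-search
    # rule of the MCDPM algorithm are replaced by common stand-ins.
    x = x0.copy()
    for k in range(max_iter):
        Hx = H(x)
        if np.linalg.norm(Hx) <= tol:
            break
        d = -Hx                       # steepest descent stand-in for (8)
        alpha = gamma                 # backtracking: alpha = gamma * beta^m
        while -np.dot(H(x + alpha * d), d) < sigma * alpha * np.dot(d, d):
            alpha *= beta
            if alpha < 1e-12:
                break
        z = x + alpha * d             # trial point z_k
        Hz = H(z)
        # hyperplane projection step: project x_k - lambda_k H(z_k)
        lam = np.dot(Hz, x - z) / np.dot(Hz, Hz)
        x = project(x - lam * Hz)
    return x
```

For example, with the trivially monotone map $H(x) = x$ and $\Omega = \mathbb{R}^n$ (identity projection), the iterates contract geometrically toward the solution $x = 0$.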

III. NUMERICAL EXPERIMENTS
In this section, we present some numerical experiments with our algorithm (MCDPM) and compare its performance with some existing algorithms. Using some test problems, we compare the performance of MCDPM with three other algorithms: Algorithm 2.1 proposed by Zheng et al. [48], the self-adaptive three-term conjugate gradient method (SATCGM) of Wang et al. [49], and the NLS algorithm proposed in [44]. In MCDPM, Algorithm 2.1, and SATCGM, the Solodov and Svaiter line search [34] is used, whereas the NLS algorithm uses a different line search. All codes are written in MATLAB R2019b and executed on a PC with an Intel Core i3-4005U processor, 4 GB of RAM, and a 1.70 GHz CPU. We considered seven problems with eight different initial points. All the problems are tested on five different dimensions: n = 1000, n = 5000, n = 10,000, n = 50,000 and n = 100,000. We set $\|H_k\| < 10^{-5}$ as the stopping criterion. In NLS, Algorithm 2.1, and SATCGM, all parameters are kept as they are in [44], [48], and [49], respectively. As for MCDPM, σ = 0.0001 is used, whereas for β, different values were tested in the interval (0, 1) and β = 0.6 was observed to give the best results. Finally, γ and χ are taken as 1.8 and 0.1, respectively.
Shown in the following tables are the results obtained, comparing MCDPM with Algorithm 2.1 and SATCGM. The tables comparing MCDPM and NLS can be found at https://documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:ec7a57a7-4904-433a-911a-e9c0080b8a88#pageNum=1. We use ITER to denote the number of iterations, FVAL the number of function evaluations, TIME the CPU time measured in seconds, and NORM the value of the norm of the function when the solution is obtained. We use ''-'' to indicate that the number of iterations exceeded 1000 without the stopping criterion being satisfied.
Below is the list of the test problems considered in this work, where $H(x) = (h_1(x), h_2(x), \ldots, h_n(x))^T$. Problem 1 ([50]): Exponential Function, defined for $i = 1, 2, \ldots, n$, with $\Omega = \mathbb{R}^n_+$. The obtained results clearly show that MCDPM outperforms Algorithm 2.1, SATCGM and NLS, solving a large proportion of the problems considered with fewer iterations and less CPU time. In addition, using the well-known Dolan and Moré performance profile [54], we also show graphically (see Figures 1, 2 and 3) the performance of the algorithms in terms of the number of iterations, CPU time, and number of function evaluations, respectively. In Figures 1 and 2, MCDPM solves 60% and 55% of the problems with the fewest iterations and the least CPU time, respectively. In Figure 3, Algorithm 2.1 solves about 46% of the problems with the fewest function evaluations, compared with about 38% and 20% for MCDPM and SATCGM, respectively. The NLS algorithm performed poorly in this experiment. This may be because it was designed in a different setting from the other algorithms: it is specifically proposed for solving unconstrained nonlinear equations, with an entirely different line search and different assumptions on the function considered. In general, MCDPM proved to be more efficient in solving systems of nonlinear monotone equations. This may be a result of the direction being a convex combination of a modified CD-like method with the steepest descent.
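For reference, Dolan and Moré performance profiles such as those in Figures 1-3 can be computed with a short routine like the following generic sketch (this is not the code used in the paper):

```python
import numpy as np

def performance_profile(T, taus):
    # Dolan-More performance profile. T[p, s] is the cost (iterations,
    # CPU time, or function evaluations) of solver s on problem p, with
    # np.inf marking failures. Returns rho[s, i], the fraction of
    # problems on which solver s's performance ratio is at most taus[i].
    best = np.min(T, axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                          # performance ratios r_{p,s}
    n_prob, n_solv = T.shape
    rho = np.empty((n_solv, len(taus)))
    for i, tau in enumerate(taus):
        rho[:, i] = np.sum(ratios <= tau, axis=0) / n_prob
    return rho
```

The value $\rho_s(1)$ is the fraction of problems on which solver $s$ is the best performer, which is the quantity quoted in the discussion above (e.g. 60% for MCDPM in terms of iterations).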

A. APPLICATIONS IN IMAGE RESTORATION
Solving ill-conditioned linear systems of equations has been a great tool in different branches of science, including signal processing. One of the popular approaches involves solving the following problem:
$$\min_{r} \frac{1}{2}\|t - Br\|_2^2 + \rho \|r\|_1, \qquad (23)$$
where $r \in \mathbb{R}^n$, $t \in \mathbb{R}^m$, $B \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a linear operator, $\rho \ge 0$, $\|r\|_2$ is the Euclidean norm of $r$ and $\|r\|_1 = \sum_{i=1}^{n} |r_i|$ is the $\ell_1$-norm of $r$. Clearly, problem (23) is a convex unconstrained minimization problem. This kind of problem appears in compressive sensing whenever the original signal is sparse in some orthogonal basis, so solving (23) will give an exact restoration.
There are several iterative methods to solve (23) (see, for example, [4], [55]-[59]). The earliest among them was proposed by Figueiredo et al. [4], and it is the most popular gradient-based projection method used for sparse reconstruction. In this method, (23) is expressed as a quadratic problem by splitting $r \in \mathbb{R}^n$ into its positive and negative parts, $r = x - y$ with $x \ge 0$, $y \ge 0$, where $x_i = (r_i)_+$, $y_i = (-r_i)_+$ for all $i = 1, 2, \ldots, n$, and $(\cdot)_+ = \max\{0, \cdot\}$. Also, we have $\|r\|_1 = e_n^T x + e_n^T y$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$. So problem (23) can now be written as
$$\min_{x, y} \frac{1}{2}\|t - B(x - y)\|_2^2 + \rho e_n^T x + \rho e_n^T y, \quad \text{s.t. } x \ge 0, \ y \ge 0. \qquad (24)$$
From [4], equation (24) can be written as
$$\min_{z} \frac{1}{2} z^T E z + c^T z, \quad \text{s.t. } z \ge 0, \qquad (25)$$
where
$$z = \begin{pmatrix} x \\ y \end{pmatrix}, \quad c = \rho e_{2n} + \begin{pmatrix} -b \\ b \end{pmatrix}, \quad b = B^T t, \quad E = \begin{pmatrix} B^T B & -B^T B \\ -B^T B & B^T B \end{pmatrix}.$$
Problem (25) is a quadratic programming problem since $E$ is positive semi-definite.
Later on, (25) was translated into a linear variational inequality problem, equivalently a linear complementarity problem, by Xiao and Zhu [6]. Moreover, the variable $z$ solves the linear complementarity problem provided that it solves the nonlinear equation
$$H(z) = \min\{z, Ez + c\} = 0, \qquad (26)$$
where the minimum is taken componentwise and $H$ is a vector-valued function. The function $H(z)$ is shown to be continuous and monotone in [5], [60]. This translates problem (23) into problem (1); as a result, the algorithm proposed in our work can be applied to solve it.
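A small sketch of this reformulation, assuming the standard GPSR data $E$ and $c$ as written above and the componentwise-minimum map $H$:

```python
import numpy as np

def build_lcp_data(B, t, rho):
    # Data for the quadratic reformulation under the splitting r = x - y:
    #   E = [[B^T B, -B^T B], [-B^T B, B^T B]],  c = rho*e_{2n} + [-B^T t; B^T t]
    BtB = B.T @ B
    Btt = B.T @ t
    E = np.block([[BtB, -BtB], [-BtB, BtB]])
    c = rho * np.ones(2 * B.shape[1]) + np.concatenate([-Btt, Btt])
    return E, c

def H(z, E, c):
    # Monotone-equation reformulation of the complementarity problem:
    # H(z) = min{z, Ez + c} componentwise, so H(z) = 0 exactly when
    # z >= 0, Ez + c >= 0 and z^T (Ez + c) = 0.
    return np.minimum(z, E @ z + c)
```

As a sanity check, for $B = I_2$, $t = (1, 0)^T$ and $\rho = 0.1$, soft thresholding gives $r = (0.9, 0)^T$, i.e. $z = (0.9, 0, 0, 0)^T$, and one can verify $H(z) = 0$.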
Here, we use the Lena, Barbara and Aeroplane test images to test the performance of our algorithm in restoring blurred images, in comparison with the CGD algorithm [6]. Each of the two algorithms is evaluated using metrics that quantify the quality of the restoration. Specifically, we report the performance of the two algorithms based on the signal-to-noise ratio (SNR), given as
$$\mathrm{SNR} = 20 \log_{10} \frac{\|\bar{x}\|}{\|\bar{x} - x^*\|},$$
where $\bar{x}$ is the original image and $x^*$ is the restored image, the structural similarity (SSIM) index, which quantifies how similar the restored image is to the original [61], and the peak signal-to-noise ratio (PSNR). Figures 4-6 show the efficiency of each of the algorithms by comparing the original, blurred and restored images. It is evident from the figures that both our algorithm (MCDPM) and CGD can restore the blurred images. However, it is clear that our algorithm outperforms the CGD algorithm based on the metrics considered.
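The SNR and PSNR metrics can be computed as follows; since the paper's exact formulas are not reproduced here, the standard definitions are assumed:

```python
import numpy as np

def snr(restored, original):
    # Signal-to-noise ratio in dB, using the usual definition
    # 20*log10(||x_original|| / ||x_original - x_restored||).
    return 20.0 * np.log10(np.linalg.norm(original) /
                           np.linalg.norm(original - restored))

def psnr(restored, original, peak=255.0):
    # Peak signal-to-noise ratio in dB for images with the given peak
    # intensity (255 for 8-bit grayscale images).
    mse = np.mean((original - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For instance, restoring a constant image of intensity 100 with a uniform error of 1 gives an SNR of exactly 40 dB.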

IV. CONCLUSION
This work proposed a modified conjugate descent projection algorithm for solving systems of nonlinear monotone equations. The algorithm is based on modifying the direction considered in [44] by replacing the modified LS-like parameter with a modified CD-like parameter. The efficiency and suitability of the proposed algorithm for large-scale problems result from its low storage requirements and the absence of Jacobian matrix computations. Numerical results obtained on the problems considered show that this algorithm is more efficient than some existing algorithms, specifically those in [44], [48] and [49]. Moreover, the global convergence of the algorithm is proved under some appropriate assumptions. An application of the algorithm to image restoration problems is also considered, and the proposed algorithm proves to be more efficient at restoring blurred images with higher quality than the CGD algorithm.

He is also the Director of the Computational and Applied Science for Smart Innovation Cluster (CLASSIC Research Cluster), KMUTT. He has over 700 scientific articles and projects either presented or published. His research interests include fixed point theory, variational analysis, random operator theory, optimization theory, approximation theory, fractional differential equations, differential games, entropy and quantum operators, fuzzy soft sets, mathematical modeling for fluid dynamics and inverse problems, dynamic games in economics, traffic network equilibria, the bandwidth allocation problem, wireless sensor networks, image restoration, signal and image processing, game theory, and cryptology. He has provided and developed many mathematical tools in his fields productively over the past years. Moreover, he serves on the editorial boards of more than 50 journals and also delivers many invited talks at international conferences every year all around the world.
PUNNARAI SIRICHAROEN received the Ph.D. degree in computing science from Ulster University, U.K. She was a Lecturer with the Department of Mathematics, King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, Thailand, until 2019. She was a Research Associate with the School of Computing and Information Engineering, Ulster University. She is currently a Lecturer with the Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok. Her research interests include image processing, computer vision, machine learning, and mobile-cloud computing applications.
AUWAL BALA ABUBAKAR received the master's degree in mathematics and the Ph.D. degree in applied mathematics from the King Mongkut's University of Technology Thonburi, Thailand, in 2015. He is currently a Lecturer with the Department of Mathematical Sciences, Faculty of Physical Sciences, Bayero University Kano, Nigeria. He is the author of more than 20 research articles. His main research interest includes methods for solving nonlinear monotone equations with application in signal recovery and image restoration.
MAHMOUD MUHAMMAD YAHAYA received the B.S. degree in pure/applied mathematics from Gombe State University, Gombe, in 2016. He is currently pursuing the master's degree in applied mathematics with the King Mongkut's University of Technology Thonburi, under the supervision of Prof. Poom Kumam. He went on to participate in a one-year mandatory national service called NYSC, from January 2017 to January 2018. His research interests centre around convex/non-convex optimization, unconstrained optimization methods, and their applications to image processing.