Lp-Method-Noise Based Regularization Model for Image Restoration

Various regularization techniques have been developed to improve the quality of image restoration. By utilizing existing image smoothing operators, method noise provides a new way to formulate regularization functions. The so-called method noise refers to the difference between an image and its smoothed version obtained by an image smoothing operator. We observe that the method noise of a clean image mainly contains edges and small-scale details, and should be as sparse as possible. Based on this observation, we introduce an L_p-norm penalty on the method noise, which accurately describes its sparse prior distribution. We formulate an L_p-method-noise based regularization model and analyze its advantages in terms of its solution and its performance in image restoration. Specifically, the L_p-norm penalty on the method noise outperforms other norms in removing noise while keeping details. Moreover, a modified Bregmanized operator splitting algorithm is designed for the proposed model. Experimental results show that the proposed method obtains better results than other method-noise based regularization methods.


I. INTRODUCTION
Image restoration, including image denoising, deblurring, inpainting, etc., is one of the most important areas in imaging science. It aims at recovering an image from a degraded version by making use of prior information such as sparsity and smoothness [1]-[5]. The degradation model of image restoration is usually given as

f = Hu + η, (1)

where f ∈ R^N (N = m × n denotes the size of the image) is the observed image, H is a linear operator, u is the expected ground truth, and η is i.i.d. white Gaussian noise with variance σ². Since image restoration is an ill-posed problem, it needs some regularization, encoding prior knowledge of the ground truth, so that a stable solution can be obtained. The image restoration problem is then equivalent to solving

min_u ‖Hu − f‖_2^2 + λJ(u), (2)

(The associate editor coordinating the review of this manuscript and approving it for publication was Gerardo Di Martino.)
where J(u) is called the regularization function, which aims to guarantee reasonable smoothness or some kind of structure in the resulting image [6]-[8].
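As a concrete illustration of the degradation model (1), a minimal NumPy sketch is given below, with H chosen as a box-blur operator. This is not the paper's code; the kernel size and noise level are arbitrary illustrative choices.

```python
import numpy as np

def box_blur(u, k=5):
    """Apply a k x k box filter (a simple choice of the linear operator H)
    using symmetric padding and separable 1-D convolutions."""
    pad = k // 2
    up = np.pad(u, pad, mode="symmetric")
    kernel = np.ones(k) / k
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def degrade(u, k=5, sigma=2.0, seed=0):
    """Simulate f = H u + eta with i.i.d. Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    return box_blur(u, k) + rng.normal(0.0, sigma, u.shape)
```

A constant image passes through the blur unchanged, which is a quick sanity check that the operator is an averaging filter.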
Well-known choices of the regularization term J(u) include total variation (TV) regularization, non-local TV, and sparse norms in transform domains [9]-[12]. However, each of these regularizers has its own shortcomings; how to further improve the restoration quality while preserving important structures such as edges and details is still challenging. Buades et al. used the residual between the true image and its non-local means (NLM) estimate as the regularization term and obtained good results [13]. Specifically, the regularization term is

J(u) = ‖u − NLM_f(u)‖_2^2,

where NLM_f(u) denotes the non-local means of u whose weights are computed from the observed image f. Zhang et al. [14] proposed the Bregmanized operator splitting (BOS) algorithm to solve (2) and obtained convergent results under the assumption that the regularization J(u) is convex. Inspired by the BOS algorithm, the authors in [15] developed a non-local regularization term that uses the ground truth, rather than the observed image, to compute the weights. The model was solved by the BOS algorithm with a restrictive stopping criterion. The authors in [16] suggested a general form of this kind of regularizer, pointed out that the L_1-norm penalty is better than the L_2-norm penalty, and formulated the regularization function

J(u) = ‖u − D_τ(u)‖_1,

where D_τ(·) is some filtering operator. At the same time, the Plug-and-Play scheme was developed to improve image restoration performance [17]-[20]. The main idea is to regard the proximal operator of the regularization function as an image denoiser D_τ(u), so that any off-the-shelf image denoising operator can be introduced into image restoration problems. Generally speaking, u − D_τ(u) is called the method noise, defined as the difference between an image and its smoothed version obtained by an image smoothing operator. Therefore, u − D_τ(u) mainly contains edges, small-scale details, noise, and so on.
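The method noise u − D_τ(u) can be computed with any smoothing operator. The sketch below uses a simple separable binomial filter as a stand-in for D_τ (in practice NLM, BM3D, etc. would be used); on a piecewise-constant image, the method noise is nonzero only near the edge, which is exactly the sparsity the paper exploits.

```python
import numpy as np

def smooth(u):
    """A simple separable binomial smoother, standing in for a generic
    denoiser D_tau (NLM, BM3D, etc. in practice)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    up = np.pad(u, 2, mode="symmetric")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def method_noise(u):
    """Method noise: the difference between an image and its smoothed version."""
    return u - smooth(u)

# A vertical step edge: the method noise concentrates around column 16
# and vanishes elsewhere, i.e. it is sparse.
u = np.zeros((32, 32))
u[:, 16:] = 1.0
r = method_noise(u)
```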
In this work, we propose a novel regularization term. Basically, we impose an L_p-norm penalty on the method noise, ‖u − D_τ(u)‖_p^p (0 < p ≤ 1), to enforce its sparsity. The main contributions of this paper can be summarized as follows: 1) The method noise u − D_τ(u) is considered, which allows other denoisers D_τ(u) such as TV, block matching and 3D filtering (BM3D) [21], neighborhood filtering (NF) [22], and so on. For example, BM3D outperforms the NLM method in most cases, and it restores better results than NLM when used as the denoiser. 2) Since the method noise u − D_τ(u) of a clean image should be as sparse as possible, an L_p-norm constraint on u − D_τ(u) is more reasonable than the L_2-norm or L_1-norm, as it more accurately depicts the sparse prior distribution of the image. Therefore, we propose an L_p-denoiser based regularization model for image restoration. 3) We present a modified BOS algorithm by adding an adaptive adjusting scheme for the parameters, which forces the residual error to decline overall and improves the convergence performance of the algorithm. From the point of view of algorithm structure, the modified BOS algorithm combines the Plug-and-Play algorithm with the residual shrinkage method, so it can also be considered an extended version of the Plug-and-Play algorithm. The remainder of the paper is organized as follows.
In the next section we review related work, including the Plug-and-Play framework and the BOS algorithm. We propose our model and numerical algorithm in Section 3. In Section 4 we present experimental results obtained by our method, discuss the key parameters, and evaluate our method by both objective metrics and visual effects. We conclude the paper and present some guidelines for future work in Section 5.

II. RELATED WORK

A. PLUG-AND-PLAY ALGORITHM
The Plug-and-Play (PnP) algorithm is a powerful framework for solving image restoration problems. For the minimization problem (2), the main idea of PnP based on the alternating direction method of multipliers (ADMM) is as follows [17]. Firstly, the minimization problem (2) can be equivalently converted into the constrained problem

min_{u,v} ‖Hu − f‖_2^2 + λJ(v) s.t. u = v, (4)

and the corresponding augmented Lagrangian function is

L(u, v; w) = ‖Hu − f‖_2^2 + λJ(v) + (δ/2)‖u − v + w‖_2^2, (5)

where w is the scaled Lagrangian multiplier. The saddle point of L(u, v; w) includes the minimizer of (4), which can be obtained by solving a sequence of sub-problems

u^{(k+1)} = argmin_u ‖Hu − f‖_2^2 + (δ/2)‖u − v^{(k)} + w^{(k)}‖_2^2,
v^{(k+1)} = argmin_v λJ(v) + (δ/2)‖u^{(k+1)} − v + w^{(k)}‖_2^2, (6)
w^{(k+1)} = w^{(k)} + u^{(k+1)} − v^{(k+1)}.

It is observed that the sub-problem (6) can be regarded as a denoising step, as it involves the prior regularization function J(v). For example, if J(v) = ‖v‖_TV, the minimization problem in (6) corresponds to the well-known ROF denoising method [2]. Based on this intuition, the authors in [17] proposed a variant of the above ADMM by suggesting that one does not need to specify J(v) before running ADMM. Instead, they replaced (6) with an off-the-shelf image denoising algorithm or filter, denoted by D_τ(·), to yield v^{(k+1)} = D_τ(u^{(k+1)} + w^{(k)}).
Examples of D τ (·) include a wide variety of patch based approaches such as TV, NLM, BM3D, NF and so on.
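The PnP iteration above can be sketched compactly for the denoising case H = I, where the u-subproblem has a closed form. The following is a self-contained illustration, not the authors' code: a binomial smoother stands in for D_τ, and the parameter values (delta, iteration count) are arbitrary.

```python
import numpy as np

def smooth(u):
    """Separable binomial smoother standing in for the plugged-in denoiser D_tau."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    up = np.pad(u, 2, mode="symmetric")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def pnp_admm_denoise(f, delta=1.0, iters=20):
    """PnP-ADMM for min_u ||u - f||^2 + lambda*J(v) s.t. u = v, with the
    v-subproblem replaced by the plugged-in denoiser.  For H = I the
    u-subproblem argmin_u ||u - f||^2 + (delta/2)||u - v + w||^2 is solved
    in closed form below."""
    u, v, w = f.copy(), f.copy(), np.zeros_like(f)
    for _ in range(iters):
        u = (2.0 * f + delta * (v - w)) / (2.0 + delta)  # u-step (closed form)
        v = smooth(u + w)                                # v-step: v = D_tau(u + w)
        w = w + u - v                                    # dual (multiplier) update
    return v
```

A constant image is a fixed point of the iteration, and a noisy constant image comes out with reduced variance, matching the intended denoising behavior.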

B. METHOD NOISE AND BOS ALGORITHM
By introducing the regularization J(u) = ‖u − NLM_f(u)‖_2^2, the authors in [13] gave the NLM based regularization model

min_u ‖Hu − f‖_2^2 + λ‖u − NLM_f(u)‖_2^2. (9)

Since the observed image f has been blurred and may be corrupted by noise, the weights computed from it are unreliable. The authors in [14] proposed to improve the above model by computing the weights from the restored image u, that is,

min_u ‖Hu − f‖_2^2 + λ‖u − NLM_u(u)‖_2^2. (10)

To solve (10), they introduced the Bregmanized operator splitting (BOS) algorithm, which combines Bregman iteration and operator splitting into a unified framework. A direct extension of problem (10) is to replace the NLM operator with a general image smoothing operator D_τ(·). The authors in [16] analyzed that the method noise mainly contains edges, small-scale details, and noise (if present), and concluded that the L_1-norm penalty on the method noise is better than the L_2-norm penalty. They proposed a weighted L_1-method-noise regularization model, denoted the WL_1-denoiser method.
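A rough sketch of the BOS idea (a Bregman update on the data, a gradient step on the fidelity, and a proximal/denoising step) is given below for the quadratic method-noise penalty as in (10). This is an illustration under simplifying assumptions of mine, not the authors' exact algorithm: H is taken as a diagonal sampling mask (so H = H^T), the smoothing operator is frozen at the intermediate image so the proximal step has a closed form, and a binomial smoother stands in for NLM.

```python
import numpy as np

def smooth(u):
    """Separable binomial smoother standing in for NLM / a generic denoiser."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    up = np.pad(u, 2, mode="symmetric")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def bos(f, mask, denoiser, lam=0.1, delta=1.0, iters=100):
    """BOS sketch for min_u ||Hu - f||^2 + lam*||u - D(u)||^2 with H a
    diagonal 0/1 sampling mask."""
    u = f.copy()
    w = f.copy()                                 # Bregman-updated data, w0 = f
    for _ in range(iters):
        v = u - delta * mask * (mask * u - w)    # gradient step on ||Hu - w||^2
        d = denoiser(v)
        # closed-form prox of lam*||u - d||^2 + (1/(2*delta))*||u - v||^2
        u = (v + 2.0 * lam * delta * d) / (1.0 + 2.0 * lam * delta)
        w = w + f - mask * u                     # Bregman update adds residual back
    return u
```

On a toy inpainting problem (a constant image with half the pixels missing) the iteration fills in the gaps.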

III. THE PROPOSED MODEL AND ALGORITHM

A. THE PROPOSED MODEL
Based on the above observation and analysis, we conclude that u − D_τ(u) reflects prior information of the ground truth. Here, D_τ(·) can be the filtering result of some off-the-shelf operator such as TV, NLM, BM3D, BF, and so on. In Fig. 1, we plot the histograms of u − D_τ(u) together with the empirical Gaussian, Laplacian, and hyper-Laplacian distributions, where D_τ(·) is selected as NLM and BM3D, respectively. It is observed that the method noise of a clean image is indeed sparse: its distribution is close to the hyper-Laplacian distribution [23]-[25], and the better-matched regularization function is the L_p-norm (0 < p < 1). Based on this, we impose the L_p-norm on the method noise and propose the sparse regularization model

min_u (1/2)‖Hu − f‖_2^2 + (1/(2λ))‖u − D_τ(u)‖_p^p, (12)

where p is an important parameter. In this paper we model it in the range 0 < p ≤ 1, and we further discuss its impact on the image restoration quality in the experimental section.

B. THE MINIMIZATION ALGORITHM
By applying the BOS technique, the numerical algorithm to solve problem (12) can be formulated as

v^{(k+1)} = u^{(k)} − δ H^T (H u^{(k)} − w^{(k)}), (13)
u^{(k+1)} = argmin_u (1/2)‖u − v^{(k+1)}‖_2^2 + (1/(2λ))‖u − D_τ(v^{(k+1)})‖_p^p, (14)
w^{(k+1)} = w^{(k)} + f − H u^{(k+1)}, (15)

where w^{(0)} = f is the Bregman-updated data.

Algorithm 1 Generalized Soft-Thresholding (GST) for (17)
Input: p, s, x, and the inner iteration number J; compute y = GST_p^s(x) as follows:
1. Compute the threshold τ_p^GST(s) = (2s(1 − p))^{1/(2−p)} + sp(2s(1 − p))^{(p−1)/(2−p)}.
2. If |x| ≤ τ_p^GST(s), set y = 0.
3. Otherwise, set y^{(0)} = |x| and iterate y^{(j+1)} = |x| − sp(y^{(j)})^{p−1} for j = 0, ..., J − 1; output y = sign(x) y^{(J)}.
To solve the sub-problem (14), we let η = u − D_τ(v^{(k+1)}) and rewrite the sub-problem as

min_η (1/2)‖η − (v^{(k+1)} − D_τ(v^{(k+1)}))‖_2^2 + (1/(2λ))‖η‖_p^p. (16)

The minimization problem (16) can be solved component-wise by the generalized soft-thresholding method

η^{(k+1)} = GST_p^{1/(2λ)}(v^{(k+1)} − D_τ(v^{(k+1)})), (17)

where GST refers to the generalized soft-thresholding operator expanded in Algorithm 1. Note that the inner iteration number in it is empirically selected as J = 2 or 3 [24], with which we can obtain satisfactory results. As a result, we can update u as

u^{(k+1)} = D_τ(v^{(k+1)}) + GST_p^{1/(2λ)}(v^{(k+1)} − D_τ(v^{(k+1)})). (18)

Combining equations (13), (18), and (15), we formulate the numerical algorithm for (12) in Algorithm 2. Comparing it with PnP, one can observe that Algorithm 2 adds the residual shrinkage GST_p^{1/(2λ)}(v^{(k+1)} − D_τ(v^{(k+1)})), which includes useful edges and details, back into the restored image. So it is better than the Plug-and-Play algorithm at protecting edge and detail information.
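The GST operator of Algorithm 1 admits a compact, vectorized implementation. The sketch below follows the generalized iterated shrinkage scheme cited as [24]; it solves min_y (1/2)(y − x)^2 + s|y|^p elementwise, and for p = 1 it reduces to ordinary soft-thresholding with threshold s.

```python
import numpy as np

def gst(x, s, p, J=3):
    """Generalized soft-thresholding: elementwise minimizer of
    (1/2)(y - x)^2 + s*|y|^p, computed by J fixed-point iterations."""
    x = np.asarray(x, dtype=float)
    # threshold below which the minimizer is exactly zero
    tau = (2.0 * s * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + s * p * (2.0 * s * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    y = np.zeros_like(x)
    big = np.abs(x) > tau
    a = np.abs(x[big])
    t = a.copy()                       # y^(0) = |x|
    for _ in range(J):
        t = a - s * p * t ** (p - 1.0) # y^(j+1) = |x| - s*p*(y^(j))^(p-1)
    y[big] = np.sign(x[big]) * t
    return y
```

For p = 1 the threshold formula collapses to tau = s (note 0^0 = 1 in floating point) and the iteration to t = |x| − s, i.e. the classical soft-thresholding operator.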

C. ADVANTAGE OF THE PROPOSED MODEL
It is noted that, if p = 2 and D_τ(·) is selected as the NLM denoiser, the proposed model degenerates to the L_2-NLM method (10), which can be solved by the BOS algorithm.
If p = 1, it is observed that in Algorithm 2 the updating schemes for v and w are unchanged, while the update for u degenerates to the soft-thresholding method. This corresponds exactly to the solution of the L_1-denoiser model. So the proposed algorithm is a generalization of the L_1-denoiser and WL_1-denoiser methods.
Furthermore, since 1/(2λ) → +∞ when λ → 0, the update of u degenerates into

u^{(k+1)} = D_τ(v^{(k+1)}).

This leads to the following iteration

v^{(k+1)} = u^{(k)} − δ H^T (H u^{(k)} − w^{(k)}),
u^{(k+1)} = D_τ(v^{(k+1)}),
w^{(k+1)} = w^{(k)} + f − H u^{(k+1)},

which corresponds to the PnP algorithm. In this sense, Algorithm 2 can be considered an extended version of the Plug-and-Play algorithm.
Comparing with the L_2-NLM, WL_1-denoiser, and PnP algorithms, one can appreciate the advantages of our method from the solution process. In the iteration schemes of all these methods, v^{(k+1)} is essentially an intermediate image and u^{(k+1)} further refines v^{(k+1)}; different models lead to different refining strategies. Our method improves the quality of u^{(k+1)} as follows: it first applies an image smoothing operator D_τ(·) to the intermediate blurred image v^{(k+1)}, yielding a smoothed clean version D_τ(v^{(k+1)}) and a residual, the method noise v^{(k+1)} − D_τ(v^{(k+1)}), which mainly contains the edges, small-scale details, and noise of v^{(k+1)}. To restore the edges while discarding the noise, it applies the generalized soft-thresholding operator to the method noise and obtains a refined version u^{(k+1)}. The L_1-method-noise model refines v^{(k+1)} in a similar way; the major difference is that it corresponds only to the case p = 1 of our method, which lacks adaptability and cannot always model the edges well. The PnP algorithm does not restore the edges at all: it refines v^{(k+1)} directly by applying a smoothing operator D_τ(·), which may over-smooth the edges. The L_2-NLM method refines v^{(k+1)} by a weighted average of v^{(k+1)} and its smoothed clean version NLM(v^{(k+1)}), which may also over-smooth the edges.

IV. IMPLEMENTATION DETAILS AND EXPERIMENTAL RESULTS
To validate the performance of the proposed model, we test on several standard test images, as shown in Fig. 2. We first discuss the parameters and their selection criteria, then compare the proposed method with several state-of-the-art methods. We utilize the normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) as performance measures, which are defined as

NMSE = E‖u − u_0‖_2^2 / E‖u_0‖_2^2,
PSNR = 10 log_10(255^2 N / ‖u − u_0‖_2^2),
SSIM = (2μ_u μ_{u_0} + c_1)(2σ_0 + c_2) / ((μ_u^2 + μ_{u_0}^2 + c_1)(σ_u^2 + σ_{u_0}^2 + c_2)),

respectively. Here, u_0 is the original image, u is the restored image, E is the expectation operator, μ_u and μ_{u_0} denote their means, σ_u^2 and σ_{u_0}^2 are their variances, σ_0 is the covariance of u and u_0, and c_1 > 0 and c_2 > 0 are constants.
It is clear that the restoration result is better when the NMSE is smaller, while higher PSNR and SSIM imply better quality of the restored image. It is worth noting that we conduct ten random realizations for each group of experiments; the reported NMSE/PSNR/SSIM values are their averages.
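The three metrics can be implemented directly from the formulas above. The sketch below assumes 8-bit images (peak 255) and uses the global, single-window form of SSIM exactly as written in the text; standard SSIM implementations instead average over local windows.

```python
import numpy as np

def nmse(u, u0):
    """Normalized mean square error."""
    return np.sum((u - u0) ** 2) / np.sum(u0 ** 2)

def psnr(u, u0, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming the given peak intensity."""
    mse = np.mean((u - u0) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(u, u0, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global (single-window) SSIM per the formula in the text."""
    mu, mu0 = u.mean(), u0.mean()
    var, var0 = u.var(), u0.var()
    cov = ((u - mu) * (u0 - mu0)).mean()
    return ((2 * mu * mu0 + c1) * (2 * cov + c2)) / \
           ((mu ** 2 + mu0 ** 2 + c1) * (var + var0 + c2))
```

A shifted image with unit mean squared error gives PSNR = 10 log10(255²) ≈ 48.13 dB, and an image compared with itself gives SSIM = 1.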

A. CRITERION OF CHOOSING PARAMETERS
To start the proposed Algorithm 2, the parameters λ, τ, δ, and p need to be given. p is a model parameter, which will be analyzed in detail in the following subsection. λ is the regularization parameter that balances the terms in problem (12). τ and δ are the filtering and step-size parameters, respectively.
To further improve the convergence performance of Algorithm 2, we introduce a discriminative increasing scheme for the parameters. Instead of choosing constant parameters, we update λ by λ_{k+1} = γλ_k, and simultaneously set τ_k = αλ_k and δ_k = 1/(βλ_k). The adaptive updating rule for λ_k is based on the residual error ε_k, defined as the normalized data-fitting residual

ε_k = ‖Hu^{(k)} − f‖_2 / ‖f‖_2.

Introducing constants γ > 1 and 0 < η < 1, we conditionally update λ_k according to the following scheme:
• If ε_{k+1} < ηε_k, then λ_{k+1} = λ_k;
• otherwise, λ_{k+1} = γλ_k.
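One step of this adaptive rule can be sketched as follows. This is my reading of the scheme: λ is held while the residual error decreases fast enough and otherwise grown by γ, with τ and δ tied to λ as stated above; the default parameter values are illustrative.

```python
def update_params(lam, eps_new, eps_old, gamma=1.2, eta=0.9, alpha=1e-2, beta=1.0):
    """One step of the adaptive parameter rule:
    keep lambda while eps_{k+1} < eta * eps_k, otherwise lambda <- gamma * lambda;
    then tau_k = alpha * lambda_k and delta_k = 1 / (beta * lambda_k)."""
    if eps_new >= eta * eps_old:
        lam = gamma * lam
    tau = alpha * lam
    delta = 1.0 / (beta * lam)
    return lam, tau, delta
```

With γ > 1 this can only increase λ, which shrinks the GST threshold 1/(2λ) and the step size δ over the iterations, a standard continuation strategy.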
With the above adaptive parameters, Algorithm 2 can be enriched into a modified BOS algorithm, shown as Algorithm 3. To evaluate the modified scheme, we test the variation tendency of ε_k in the scenario of image deblurring for Lena, as shown in Fig. 3. The results show that the modified scheme in Algorithm 3 prevents the residual error from bouncing too much and forces it to keep an overall declining tendency. To this extent, it can stop the algorithm from falling into a bad local minimum. In the following experiments, four parameters need explicit values: λ_0, α, β, and γ. A large number of experiments show that our method works well with λ_0 ∈ (10^{−3}, 10^{−1}), so we empirically set λ_0 = 10^{−2}. The other three parameters are given in each group of deblurring and inpainting experiments.

B. DISCUSSION ON PARAMETER P
Since p is an important parameter in the proposed model, we evaluate its influence on the image restoration performance through extensive experiments. Taking the deblurring task for the image Lena as an example, we report the average PSNRs and SSIMs of the restored images in Fig. 4 and Fig. 5. It is observed that the deblurring results are better with p ∈ (0.5, 0.8) than in other scenarios. In particular, the maximum values of both PSNR and SSIM occur at approximately p = 0.7. Without loss of generality, we take p = 0.7 as the representative selection of p in the following experimental tests.

C. COMPARISON WITH OTHER REGULARIZATION METHODS
The proposed model can be widely applied in image restoration. In this subsection, we evaluate its performance in two applications: image deblurring and inpainting. For the denoiser, we mainly consider NLM and BM3D. As a result, we compare the proposed method with the following state-of-the-art methods: L 2 -NLM, WL 1 -NLM, WL 1 -BM3D, PnP-NLM, and PnP-BM3D [15]- [17].

1) IMAGE DEBLURRING
For image deblurring, all images are gray-scale with size 256 × 256. We mainly report experimental results under two forms of blur: average blur and motion blur. The average blur is simulated by applying a box filter of size 5 × 5 or 7 × 7; the larger the size, the more heavily the image is blurred. The motion blur has two components: counterclockwise rotation by an angle θ (in degrees) and shifting by L (in pixels). We consider H_1: (L, θ) = (5, 10°) and H_2: (L, θ) = (7, 20°), respectively. Furthermore, we also add i.i.d. white Gaussian noise in the experiments to show the robustness of our method to noise. During the experiments, the stopping criterion is ε_{k+1} < 10^{−4} or the iteration number k ≥ 50. From extensive experiments, we empirically set α = 10^{−2}, β = 1, γ = 1.2. To evaluate the convergence performance of the proposed method, we report the variation trends of NMSE, PSNR, and SSIM, respectively, as shown in Fig. 6. From the plots, it can be found that the proposed L_p-denoiser method significantly outperforms several related methods. Specifically, the NMSE of L_p-BM3D (p = 0.7) decays the most rapidly, and its PSNR and SSIM rise the fastest.
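The two blur kernels can be constructed as below. The motion kernel is an approximation of my own (a length-L line at angle θ rasterized to the nearest pixels and normalized, similar in spirit to MATLAB's fspecial('motion', L, theta)); the paper does not specify its exact construction.

```python
import numpy as np

def box_kernel(k):
    """k x k average-blur (box) kernel."""
    return np.ones((k, k)) / (k * k)

def motion_kernel(L, theta_deg):
    """Approximate linear-motion PSF: a length-L line at angle theta_deg
    (counterclockwise), rasterized onto nearest pixels and normalized."""
    psf = np.zeros((L, L))
    c = (L - 1) / 2.0
    t = np.deg2rad(theta_deg)
    for i in range(L):
        d = i - c
        x = int(round(c + d * np.cos(t)))
        y = int(round(c - d * np.sin(t)))   # image rows grow downward
        psf[y, x] = 1.0
    return psf / psf.sum()
```

Both kernels sum to one, so they preserve the mean intensity of the image they blur.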
We report the average PSNR and SSIM of the restored images in Table 1 and Table 2, where Table 1 corresponds to the results for average blur and Table 2 to those for motion blur. The best results are in bold font. It can be observed that the PSNRs and SSIMs of the proposed method are higher than those of the other methods. Taking the 7 × 7 average blur as an example, in terms of PSNR, our method L_p-BM3D improves over L_2-NLM, WL_1-NLM, WL_1-BM3D, PnP-NLM, and PnP-BM3D by around 1.30 dB, 0.30 dB, 0.55 dB, 0.35 dB, and 0.40 dB, respectively. In terms of SSIM, the proposed L_p-BM3D improves over these methods by around 0.09, 0.05, 0.03, 0.06, and 0.04. These results confirm that, by introducing a sparse penalty on the method noise, the proposed model recovers useful image information better than the others.
For visual assessment, we show the restored images of Lena in Fig. 7 for average blur and Fig. 8 for motion blur. The size of the average blur mask is 5 × 5, and the parameters of the motion blur are L = 5, θ = 10°. Using either NLM or BM3D as the image smoothing operator, the proposed L_p-denoiser obtains better results than L_2-NLM, PnP-denoiser, and WL_1-denoiser. For example, the image in Fig. 7(h) restored by our method L_p-BM3D is visually much better than the image in Fig. 7(d) restored by WL_1-BM3D and the image in Fig. 7(f) restored by PnP-BM3D. In the smooth areas of Fig. 7(h), one can notice that the noise is removed better than in Fig. 7(d) and Fig. 7(f). Moreover, the brim (marked by the yellow box) looks more natural than those obtained by the other methods. Similarly, Fig. 8(g) and Fig. 8(h) show better restorations than the others.

2) IMAGE INPAINTING
Now we consider image restoration on an image with missing pixels. For a binary mask H, the indices where the entries are zero represent the locations of missing pixels. In this group of experiments, each incomplete image has 50% randomly missing pixels. Parameters are empirically fixed as α = 0.8 × 10^{−2}, β = 1, and γ = 1.1.
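Such a binary sampling mask can be generated as below; a minimal sketch, with the missing ratio defaulting to the 50% used in these experiments.

```python
import numpy as np

def random_mask(shape, missing=0.5, seed=0):
    """Binary mask H with zeros at the missing-pixel locations."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) >= missing).astype(float)

mask = random_mask((64, 64), 0.5)
# observed incomplete image: f = H * u (elementwise, since H is diagonal)
```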
We report the variation trends of NMSE, PSNR, and SSIM, respectively, as shown in Fig. 9. From the results, it can be found that the NMSEs of the L_p-denoiser methods, including L_p-NLM and L_p-BM3D, decay more rapidly than those of the others, and their PSNR and SSIM increase faster. Table 3 reports the experimental results on all test images. From the data, it is clear that the L_p-denoiser outperforms L_2-NLM, WL_1-denoiser, and PnP-denoiser. In terms of PSNR, L_p-BM3D improves over L_2-NLM, WL_1-NLM, WL_1-BM3D, PnP-NLM, and PnP-BM3D by around 1.0 dB, 0.35 dB, 0.30 dB, 0.50 dB, and 0.40 dB, respectively. In terms of SSIM, the proposed L_p-BM3D improves over these methods by around 0.06, 0.05, 0.02, 0.04, and 0.03.
In Fig. 10, we show the restored images of Lena for inpainting. Using either NLM or BM3D as the denoiser, our L_p-denoiser method obtains better results than L_2-NLM, PnP-denoiser, and WL_1-denoiser. For example, the image in Fig. 10(h) restored by our method L_p-BM3D is visually much better than the images restored by PnP-BM3D (Fig. 10(d)) and WL_1-BM3D (Fig. 10(f)). In particular, the brim (marked by the yellow box) looks more natural than in the other methods.

V. CONCLUSION
The character of method noise is distinctive. It mainly contains the edges and small-scale details of the image, and the corresponding histogram is sparse. The L_p (0 < p < 1) norm describes this sparsity well. Therefore the proposed model with regularization function ‖u − D_τ(u)‖_p^p is well motivated and reasonable, though its convexity is unclear. Furthermore, we design a modified BOS algorithm in which the parameters are adaptively updated to ensure a downward trend of the residual error. Finally, we evaluate the proposed algorithm by experiments. The results show that the proposed L_p-denoiser method has superior performance and better visual effects than the L_2-NLM, L_1-denoiser, and PnP-denoiser algorithms. In our future work, we will strengthen the theoretical analysis of the convergence and stability of the algorithm, together with the study of parameter learning methods.

His current research interests include optimization algorithms, system engineering, and optimal control. XUDONG WANG received the B.S. degree in mathematics from Shandong Normal University, China, in 1997, the M.S. degree in applied mathematics from the Guilin University of Electronic Technology, in 2007, and the Ph.D. degree in applied mathematics from Xidian University, China, in 2013. He is currently an Associate Researcher with the School of Computer and Information Engineering, Nanning Normal University, China. His current research interests include image processing and analysis and data processing. VOLUME 8, 2020