CNN-Based Projected Gradient Descent for Consistent CT Image Reconstruction

We present a new image reconstruction method that replaces the projector in a projected gradient descent (PGD) with a convolutional neural network (CNN). Recently, CNNs trained as image-to-image regressors have been successfully used to solve inverse problems in imaging. However, unlike existing iterative image reconstruction algorithms, these CNN-based approaches usually lack a feedback mechanism to enforce that the reconstructed image is consistent with the measurements. We propose a relaxed version of PGD wherein gradient descent enforces measurement consistency, while a CNN recursively projects the solution closer to the space of desired reconstruction images. We show that this algorithm is guaranteed to converge and, under certain conditions, converges to a local minimum of a non-convex inverse problem. Finally, we propose a simple scheme to train the CNN to act like a projector. Our experiments on sparse-view computed-tomography reconstruction show an improvement over total variation-based regularization, dictionary learning, and a state-of-the-art deep learning-based direct reconstruction technique.


I. INTRODUCTION
While medical imaging is a fairly mature area, there is recent evidence that it may still be possible to reduce the radiation dose and/or speed up the acquisition process without compromising image quality [1]. This can be accomplished with the help of sophisticated reconstruction algorithms that incorporate some prior knowledge (e.g., sparsity) on the class of underlying images. The reconstruction task is usually formulated as an inverse problem, where the image-formation physics is modeled by an operator H : R^N → R^M (called the forward model). The measurement equation is y = Hx + n ∈ R^M, where x is the space-domain image that we are interested in recovering and n ∈ R^M is the noise intrinsic to the acquisition process.
In the case of extreme imaging, the number and the quality of the measurements are both reduced as much as possible, e.g., in order to decrease either the scanning time in MRI or the radiation dose in CT. Moreover, the measurements are typically very noisy due to short integration times, which calls for some form of denoising. Indeed, there may be significantly fewer measurements than the number of unknowns (M << N). This gives rise to an ill-posed problem in the sense that there may be an infinity of consistent images that map to the same measurements y. Thus, one challenge of the reconstruction algorithm is essentially to select the "best" solution among a multitude of potential candidates.

The authors are with the Biomedical Imaging Group, EPFL, Lausanne, Switzerland. This project is funded by H2020-ERC, grant agreement No. 692726 - GlobalBioIm. This manuscript was submitted to "IEEE Transactions on Medical Imaging", for the special edition, "Machine Learning for Image Reconstruction", on 30 Aug. 2017.
The available reconstruction algorithms can be broadly arranged in three categories (or generations), which represent the continued efforts of the research community to address the aforementioned challenges.
1) Classical Algorithms: Here, the reconstruction is performed directly by applying a suitable linear operator. This may be the backprojection (BP) H^T y or the filtered backprojection (FBP) FH^T y, where F : R^N → R^N is a regularized version of (H^T H)^{-1}.¹ FBP-type algorithms are fast; they provide excellent results when the number of measurements is sufficient and the noise is small [2]. However, they are not suitable for extreme imaging scenarios because they introduce artifacts intimately connected to the inversion step.
2) Iterative Algorithms: These algorithms avoid the shortcomings of the classical ones by solving

x* = arg min_x ( E(Hx, y) + λR(x) ),   (1)

where E : R^M × R^M → R^+ is a data-fidelity term that favors solutions consistent with the measurements, R : R^N → R^+ is a suitable regularizer that encodes prior knowledge about the image x to be reconstructed, and λ ∈ R^+ is a tradeoff parameter. The quantity y* = Hx*, where x* is the solution of (1), can be interpreted as the denoised version of y. Under the assumption that the functionals E and R are convex, one can show that the solution of (1) also satisfies

x* = arg min_{x ∈ R^N : Hx = y*} R(x).   (2)

Therefore, among all potential solutions that admit the denoised measurement y*, the algorithm picks the one with the least R. This shows that the quality of the reconstruction depends heavily on the prior encoder R. Generally, these priors are either handpicked (e.g., total variation (TV) or the ℓ1-norm of the wavelet coefficients of the image [1], [3]-[6]) or learned through a dictionary [7]-[9]. However, in either case, they are restricted to well-behaved functionals that can be minimized via a convex routine [10]-[13]. This limits the type of prior knowledge that can be injected into the algorithm.

An interesting variant within the class of iterative algorithms is "plug-and-play ADMM" [14]. We recall that ADMM is an iterative optimization technique [13] which repeatedly alternates between (i) a linear solver that reinforces consistency w.r.t. the measurements, and (ii) a nonlinear operation that re-injects the prior. Interestingly, the effect of (ii) is akin to denoising. This perspective has resulted in a scheme where an off-the-shelf denoiser is plugged into the latter step [14]-[18]. This scheme, therefore, is more general than the optimization framework (1) but still lacks theoretical justification. In fact, there is no good understanding yet of the connection between the use of a given denoiser and the regularization it imposes.

¹ Alternatively, F can be a regularized version of (HH^T)^{-1} and be applied as H^T F, as is often the case in CT.

arXiv:1709.01809v1 [cs.CV] 6 Sep 2017
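The alternation between (i) and (ii) can be sketched in a few lines. In the toy below (our illustration, not from the paper), the plugged-in denoiser is soft-thresholding, the proximal operator of the ℓ1-norm, which turns the loop into classical ISTA; the problem sizes, threshold, and random forward matrix are illustrative assumptions.

```python
import numpy as np

def consistency_prior_loop(H, y, denoise, gamma, n_iter=500):
    """Alternate (i) a gradient step enforcing data consistency with
    (ii) a plugged-in denoiser that re-injects the prior."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x - gamma * H.T @ (H @ x - y)  # (i) consistency w.r.t. measurements
        x = denoise(x)                     # (ii) prior re-injection
    return x

# Soft-thresholding is the denoiser induced by an l1 prior; with it,
# the loop above is exactly ISTA.
soft = lambda v, t=0.02: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
H = rng.standard_normal((40, 100)) / np.sqrt(40)  # M << N: ill-posed
x_true = np.zeros(100)
x_true[[5, 37, 80]] = 1.0                         # sparse ground truth
y = H @ x_true                                    # noiseless measurements
gamma = 1.0 / np.linalg.norm(H.T @ H, 2)          # step size < 2/||H^T H||_2
x_hat = consistency_prior_loop(H, y, soft, gamma)
```

Swapping `soft` for any other denoiser gives the plug-and-play variant; the open question mentioned above is precisely which regularization such a swap implicitly imposes.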
3) Learning-based Algorithms: Recently, there has been a surge in using deep learning to solve inverse problems in imaging [19]-[23], establishing new state-of-the-art results for tasks like sparse-view CT reconstruction [19]. These approaches use the convolutional neural network (CNN) as a regressor. But, rather than attempting to reconstruct the image from the measurements y directly, the most successful strategies so far have been to train the CNN as a regressor between a rogue initial reconstruction Ay, where A : R^M → R^N, and the final, desired reconstruction [19], [20]. This initial reconstruction could be obtained by FBP (FH^T y), BP (H^T y), or any other linear operation.
Once the training is complete, the reconstruction for an unseen measurement y is given by x* = CNN_{θ*}(Ay), where CNN_θ : R^N → R^N denotes the CNN as a function and θ* denotes the internal parameters of the CNN after training.
These schemes exploit the fact that the structure of images can be learnt from representative examples. CNNs are favored because of the way they encode/represent the data in their hidden layers. In this sense, a CNN can be seen as a good prior encoder. Although the results reported so far are remarkable in terms of image quality, there is still some concern on whether or not they can be trusted, especially in the context of diagnostic imaging. The main limitation of direct algorithms such as [19] is that they do not provide any worst-case performance guarantee. Moreover, even in the case of noiseless (or low-noise) measurements, there is no assurance that the reconstructed image is consistent with the measurements. This is not overly surprising because, unlike the iterative schemes, there is no feedback mechanism that imposes this minimal requirement.

A. Contribution
We propose a simple yet effective iterative scheme that tries to incorporate the advantages of the existing algorithms while sidestepping their disadvantages (see Figure 1). Moreover, it outperforms the existing algorithms at reconstructing biomedical images from their sparse-view CT measurements. Specifically,
• We initialize our reconstruction using a classical algorithm.
• We learn a CNN that acts as a projector onto a set S which can be intuitively thought of as the manifold of the data (e.g., biomedical images). In this sense, our CNN encodes the prior knowledge of the data. Its purpose is to map an input image to an output image that is structurally closer to the training data than the input image.
• Similarly to variational methods, we iteratively alternate between minimizing the data-fidelity term and projecting the result onto the set S, by applying a suitable variant of the projected gradient descent (PGD) which ensures convergence. This scheme is similar in spirit to plug-and-play ADMM but is simpler to analyze. In this way, instead of performing the reconstruction through a feedbackless pipeline, we perform it by iteratively enforcing measurement consistency and injecting prior knowledge.

B. Roadmap for the Paper
The paper is organized as follows: In Section II, we discuss the formal framework that motivates our approach. We mathematically justify the use of a projector onto a set as an effective strategy to solve inverse problems. In Section III, we present an iterative algorithm inspired by PGD. It has been modified so as to converge in practical cases where the projection property is only approximate. This is crucial because, although we train the CNN as a projector using a training set, there is no guarantee that it will act like a projector for unseen data. We discuss in Section IV a novel technique to train the CNN as a projector onto a set, especially when the training data is small (e.g., around 500 images in our case). This is followed by results and conclusion.

C. Related and Prior Work
Deep learning has shown promising results in the cases of image denoising, super-resolution, and deconvolution. Recently, it has also been used to solve inverse problems in imaging using limited data [19]-[22], and in compressed sensing [24]. However, as discussed earlier, these regression-based approaches lack the feedback mechanism that can be beneficial for solving inverse problems.
The other usage of deep learning is to complement the iterative algorithms. This includes learning a CNN as an unrolled version of ISTA [25] and ADMM [26]. In [27], inverse problems including non-linear forward models are solved by partially learning the gradient descent. In [28], the iterative algorithm is replaced by a recurrent neural network (RNN). In all of these approaches, the training depends entirely on the iterative scheme the neural network is used with.
On the other hand, [14]-[18] use plug-and-play ADMM, which relies on a denoiser that is learnt independently of the iterative scheme. In [17], a generative adversarial network (GAN) trained as a projector onto a set is plugged into the plug-and-play ADMM and is used to solve arbitrary linear inverse problems. However, due to the adversarial nature, the training is complicated and requires extremely large datasets (around 8 million images). Performing such training would be challenging in biomedical imaging applications because of the lack of large datasets and the high resolution of the images (512 × 512 or more). Also, in many cases, the performance of [17] is worse than that of the regression-based deep-learning methods specialized for a given inverse problem.

II. THEORETICAL FRAMEWORK
The central theme of this paper is to use a CNN iteratively with PGD to solve inverse problems. The task of the CNN is to act like a projector: it maps an input image to the space of desired images. To understand why this scheme is effective, we first analyze how using a projector onto a set, combined with gradient descent, can help solve inverse problems. Proofs of all the theoretical results except Theorem 3 can be found in the supplementary material.

A. Notation
We consider the finite-dimensional Hilbert space R^N equipped with the scalar product ⟨·, ·⟩ that induces the ℓ2-norm ||·||_2. The spectral norm of the matrix H, denoted by ||H||_2, is equal to its largest singular value. For x ∈ R^N and ε > 0, we denote by B_ε(x) the ℓ2-ball centered at x with radius ε, i.e.,

B_ε(x) = { z ∈ R^N : ||z − x||_2 ≤ ε }.

Given a set S ⊂ R^N, a mapping P_S : R^N → S is called a projector if it satisfies the idempotent property P_S P_S = P_S. It is called an orthogonal projector if

||x − P_S x||_2 ≤ ||x − z||_2, ∀x ∈ R^N, ∀z ∈ S.
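As a concrete illustration of these definitions (our toy example, not from the paper), the orthogonal projector onto an ℓ2-ball is easy to write down and to check numerically for idempotence and for the distance-minimizing property:

```python
import numpy as np

def proj_ball(x, center, radius):
    """Orthogonal projector onto the l2-ball B_radius(center)."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

x = np.array([3.0, 0.0, 0.0])
p = proj_ball(x, np.zeros(3), 1.0)

# Idempotence: P_S P_S = P_S
assert np.allclose(proj_ball(p, np.zeros(3), 1.0), p)

# Orthogonality: p is at least as close to x as any other point of S
rng = np.random.default_rng(1)
for _ in range(100):
    z = proj_ball(rng.standard_normal(3), np.zeros(3), 1.0)  # a point of S
    assert np.linalg.norm(x - p) <= np.linalg.norm(x - z) + 1e-12
```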

B. Constrained Least Squares
Consider the problem of reconstructing an image x ∈ R^N from its noisy measurements y = Hx + n, where H ∈ R^{M×N} is the linear forward model and n ∈ R^M is additive white Gaussian noise. Our reconstruction incorporates a strong form of prior knowledge about the original image: We assume that x must lie in some set S ⊂ R^N that contains all objects of interest. The proposed way to make the reconstruction consistent with the measurements as well as with the prior knowledge is to solve the constrained least-squares problem

x* = arg min_{x ∈ S} ||Hx − y||_2^2.   (3)

The condition x ∈ S in (3) plays the role of a regularizer. If no two points in S have the same measurements and y is noiseless, then, out of all the points in R^N that are consistent with the measurement y, (3) selects a unique point x* ∈ S. In this way, it removes the ill-posedness of the inverse problem. When the measurements are noisy, (3) returns a point x* ∈ S such that y* = Hx* is as close as possible to y. Thus, it also denoises the measurement, where the quantity y* can be regarded as the denoised version of y.
It is remarkable that (3) is a generalized formulation of the regularization schemes in (1), which can be rewritten as

x* = arg min_{x ∈ S_R} ||Hx − y||_2^2,

where S_R = {x ∈ R^N : R(x) ≤ τ} for some unique τ dependent on the regularization parameter λ.

C. Projected Gradient Descent
When S is a closed convex set, it is well known [29] that a solution of (3) can be found by PGD

x_{k+1} = P_S( x_k − γH^T(Hx_k − y) ),   (4)

where γ is a step size chosen such that γ < 2/||H^T H||_2. This algorithm combines the orthogonal projection onto S with the gradient descent w.r.t. the quadratic objective function (also called the Landweber update [30]). PGD [31, Sec. 2.3] is a subclass of the forward-backward splitting [32], [33], which is known in the ℓ1-minimization literature as the Iterative Shrinkage/Thresholding Algorithm (ISTA) [10], [11], [34].
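A minimal numerical instance of the update (4), with a convex S for which the orthogonal projector is explicit (the nonnegative orthant, projected by clipping); the sizes, step size, and random forward model are illustrative assumptions:

```python
import numpy as np

def pgd(y, H, proj, gamma, n_iter=300):
    """PGD as in (4): Landweber update followed by the projection onto S."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = proj(x - gamma * H.T @ (H @ x - y))
    return x

# S = nonnegative orthant, a closed convex set; clipping at zero is its
# orthogonal projector.
proj_nonneg = lambda v: np.maximum(v, 0.0)

rng = np.random.default_rng(2)
H = rng.standard_normal((120, 30)) / np.sqrt(120)
x_true = np.abs(rng.standard_normal(30))      # ground truth lies in S
y = H @ x_true                                # noiseless measurements
gamma = 1.0 / np.linalg.norm(H.T @ H, 2)      # gamma < 2/||H^T H||_2
x_hat = pgd(y, H, proj_nonneg, gamma)
```

Since the problem here is overdetermined and noiseless, the iteration converges to the ground truth; the interesting regime in the paper is the ill-posed one, where the choice of S does the heavy lifting.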
In our problem, S is presumably non-convex, but we propose to still use the update (4) with some projector P_S that may not be orthogonal. In the rest of this section, we provide sufficient conditions on the projector P_S (not on S itself) under which (4) leads to a local minimizer of (3). Similarly to the convex case, we characterize the local minimizers of (3) by the fixed points of the combined operator

G_γ(x) = P_S( x − γH^T(Hx − y) )   (5)

and then show that some fixed point of that operator must be reached by the iteration x_{k+1} = G_γ(x_k) as k → ∞, no matter the value of x_0. We first state a sufficient condition for each fixed point of G_γ to be a local minimizer of (3).
Proposition 1. Let γ > 0 and P_S be such that, for all x ∈ R^N,

⟨z − P_S x, x − P_S x⟩ ≤ 0, ∀z ∈ S ∩ B_ε(P_S x),   (6)

for some ε > 0. Then, any fixed point of the operator G_γ in (5) is a local minimizer of (3). Furthermore, if (6) is satisfied globally, in the sense that

⟨z − P_S x, x − P_S x⟩ ≤ 0, ∀z ∈ S,   (7)

then any fixed point of G_γ is a solution of (3).
Two remarks are in order. First, (7) is a well-known property of orthogonal projections onto closed convex sets. It actually implies the convexity of S (see Proposition 2). Second, (6) is much more relaxed and easily achievable, for example, as stated in Proposition 3, by orthogonal projections onto unions of closed convex sets (special cases are unions of subspaces, which have found some applications in data modeling and clustering [35]).
Proposition 2. If P_S is a projector onto S ⊂ R^N that satisfies (7), then S must be convex.

Proposition 3. If S is a union of a finite number of closed convex sets in R^N, then the orthogonal projector P_S onto S satisfies (6).
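To make Proposition 3 concrete (our toy example), the orthogonal projector onto a finite union of convex sets can be computed by projecting onto each member and keeping the nearest candidate; below, S is the union of the two coordinate axes of R^2 (the 1-sparse vectors), which is non-convex although each axis is a convex subspace.

```python
import numpy as np

def proj_union(x, member_projectors):
    """Orthogonal projection onto a union of closed convex sets:
    project onto each member and keep the closest candidate."""
    cands = [P(x) for P in member_projectors]
    return cands[int(np.argmin([np.linalg.norm(x - c) for c in cands]))]

# Union of the two coordinate axes of R^2; each axis is a convex subspace.
axes = [lambda v: np.array([v[0], 0.0]),
        lambda v: np.array([0.0, v[1]])]

p = proj_union(np.array([2.0, 1.0]), axes)  # nearest axis point is (2, 0)
assert np.allclose(p, [2.0, 0.0])
assert np.allclose(proj_union(p, axes), p)  # idempotence
```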
The above results suggest that, when S is non-convex, the best we can hope for is to find a local minimizer of (3) through a fixed point of G_γ. Theorem 1 provides a sufficient condition for PGD to converge to a unique fixed point of G_γ.

Theorem 1. Let λ_max and λ_min be the largest and smallest eigenvalues of H^T H, respectively. If P_S satisfies (6) and is Lipschitz-continuous with constant L < (λ_max + λ_min)/(λ_max − λ_min), then, for γ = 2/(λ_max + λ_min), the sequence {x_k} generated by (4) converges to a local minimizer of (3), regardless of the initialization x_0.
It is important to note that the projector P_S can never be contractive, since it preserves the distance between any two points of S. Therefore, when H has a nontrivial null space, the condition L < (λ_max + λ_min)/(λ_max − λ_min) of Theorem 1 is not feasible. The smallest possible Lipschitz constant of P_S is L = 1, which means that P_S is non-expansive. Even with this condition, it is not guaranteed that the combined operator G_γ has a fixed point. This limitation can be overcome when G_γ is assumed to have a nonempty set of fixed points. Indeed, we state in Theorem 2 that one of them must be reached by iterating the averaged operator α Id + (1 − α)G_γ, where α ∈ (0, 1) and Id is the identity operator.
Theorem 2. Let λ_max be the largest eigenvalue of H^T H. If P_S satisfies (6) and is a non-expansive operator such that G_γ in (5) has a fixed point for some γ < 2/λ_max, then the sequence {x_k} generated by

x_{k+1} = α x_k + (1 − α) G_γ(x_k),   (8)

for any α ∈ (0, 1), converges to a local minimizer of (3), regardless of the initialization x_0.

III. RELAXATION WITH GUARANTEED CONVERGENCE
Despite their elegance, Theorems 1 and 2 are not directly useful when we construct the projector P_S by training a CNN, because it is unclear how to enforce the Lipschitz continuity of P_S on the CNN architecture. Without putting any constraints on the CNN, however, we can still achieve the convergence of the reconstruction sequence by modifying PGD as described in Algorithm 1; we name it relaxed projected gradient descent (RPGD). In the proposed algorithm, the projector P_S is replaced with a general nonlinear operator F, so that the update now involves the combined operator

G_γ(x) = F( x − γH^T(Hx − y) ).   (9)

We also introduce a sequence {c_k} that governs the rate of convergence of the algorithm and a sequence {α_k} of relaxation parameters that evolves with the algorithm.

Algorithm 1 Relaxed projected gradient descent (RPGD)
Input: y, H, A, nonlinear operator F, step size γ > 0, positive sequence {c_k}, x_0 = Ay, α_0 ∈ (0, 1]
k ← 0
while not converged do
    z_k ← F( x_k − γH^T(Hx_k − y) )
    if k ≥ 1 and ||z_k − x_k||_2 > c_k ||z_{k−1} − x_{k−1}||_2 then
        α_k ← c_k (||z_{k−1} − x_{k−1}||_2 / ||z_k − x_k||_2) α_{k−1}
    else if k ≥ 1 then
        α_k ← α_{k−1}
    end if
    x_{k+1} ← α_k z_k + (1 − α_k) x_k
    k ← k + 1
end while

The convergence of RPGD is guaranteed by Theorem 3. More importantly, if the nonlinear operator F is actually a projector and the relaxation parameters do not go all the way to 0, then RPGD converges to a meaningful point.

Theorem 3. Let the input sequence {c_k} of Algorithm 1 be asymptotically upper-bounded by C < 1. Then, the following statements hold true for the reconstruction sequence {x_k}:
(i) {x_k} converges to some point x* ∈ R^N;
(ii) if, in addition, F is continuous and {α_k} is lower-bounded by some ε > 0, then x* is a fixed point of the operator G_γ in (9);
(iii) if, in addition to (ii), F is indeed a projector onto S that satisfies (6), then x* is a local minimizer of (3).
We prove Theorem 3 in Appendix A. Note that the weakest statement here is (i); it always guarantees the convergence of RPGD, albeit not necessarily to a fixed point of G_γ. Moreover, the assumption about the continuity of F in (ii) is automatically true when F is a CNN.
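Stripped of all CT-specific details, the loop of Algorithm 1 can be sketched as follows. This is our illustration, not the authors' implementation: the relaxation rule below is one way to enforce the decay α_k||z_k − x_k||_2 ≤ c α_{k−1}||z_{k−1} − x_{k−1}||_2 that drives the convergence proof, the identity operator stands in for the trained CNN F, and all sizes are assumptions.

```python
import numpy as np

def rpgd(y, H, F, gamma, c=0.99, n_iter=300):
    """Sketch of RPGD: relaxed projected gradient descent. alpha is
    shrunk whenever ||z_k - x_k||_2 fails to decay by the factor c,
    which keeps alpha_k * ||z_k - x_k||_2 geometrically decreasing."""
    x = np.zeros(H.shape[1])  # the paper initializes with x_0 = Ay instead
    alpha, d_prev = 1.0, None
    for _ in range(n_iter):
        z = F(x - gamma * H.T @ (H @ x - y))   # gradient step, then F
        d = np.linalg.norm(z - x)
        if d_prev is not None and d > c * d_prev:
            alpha *= c * d_prev / d            # relaxation update
        x = alpha * z + (1.0 - alpha) * x      # relaxed iterate
        d_prev = d
    return x

# Sanity check with F = Id, a valid projector onto S = R^N: RPGD then
# reduces to a (relaxed) Landweber iteration and recovers x_true.
rng = np.random.default_rng(3)
H = rng.standard_normal((120, 30)) / np.sqrt(120)
x_true = rng.standard_normal(30)
y = H @ x_true
gamma = 1.0 / np.linalg.norm(H.T @ H, 2)
x_hat = rpgd(y, H, lambda v: v, gamma)
```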

IV. TRAINING CNN AS A PROJECTOR
With these theoretical foundations in place, we move on to the matter of training a CNN to act as the projector in RPGD (Algorithm 1). For any point x ∈ S, a projector onto S should satisfy P_S x = x. Moreover, we want

x = CNN_θ(x̃),   (10)

where x̃ is any perturbed version of x. Given a training set {x_1, ..., x_Q} of points drawn from the set S, we generate an ensemble of N × Q perturbed points, {{x̃_{1,1}, ..., x̃_{Q,1}}, ..., {x̃_{1,N}, ..., x̃_{Q,N}}}, and train the CNN by minimizing the loss function

J(θ) = Σ_{n=1}^{N} J_n(θ), where J_n(θ) = Σ_{q=1}^{Q} ||x_q − CNN_θ(x̃_{q,n})||_2^2.   (11)
The optimization proceeds by stochastic gradient descent for T epochs, where an epoch is defined as one pass through the training data.
It remains to select the perturbations that generate the x̃_{q,n}. Our goal here is to create a diverse set of perturbations so that the CNN does not overfit one specific type. In our experiments, while training for the t-th epoch, we chose

x̃_{q,1} = x_q : no perturbation,   (12)
x̃_{q,2} = AHx_q : specific linear perturbation,   (13)
x̃_{q,3} = CNN_{θ_t}(x̃_{q,2}) : dynamic nonlinear perturbation,   (14)

where A is a classical linear reconstruction algorithm like FBP or BP, H is the forward model, and θ_t are the CNN parameters after t epochs. We now comment on each of these perturbations in detail. Keeping x̃_{q,1} in the training ensemble trains the CNN with the defining property of the projector: the projector maps a point in the set S onto itself. If the CNN were only trained with (12), it would be an autoencoder [36].
To understand the perturbation x̃_{q,2} in (13), recall that AHx_q is a classical linear reconstruction of x_q from its measurement y = Hx_q. This perturbation is useful because we will initialize RPGD with AHx_q. Using only (13) for training would return the same CNN as in [19].
The perturbation x̃_{q,3} in (14) is the output of the CNN whose parameters θ_t change with every epoch t; thus, it is a nonlinear and dynamic (epoch-dependent) perturbation of x_q. The rationale for using (14) is that it greatly increases the training diversity (allowing the network to see T new perturbations of each training point) without greatly increasing the total training size (only requiring an additional Q gradient computations per epoch). Moreover, (14) is in sync with the iterative scheme of RPGD, where the output of the CNN is processed with a gradient descent and fed back into itself.
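A sketch of how the three ensembles could be assembled at each epoch, with a stand-in callable in place of the CNN; the helper names, toy sizes, and the identity placeholder are our assumptions for illustration only.

```python
import numpy as np

def make_ensembles(X, H, A, cnn_t):
    """Build the perturbation ensembles (12)-(14) for one epoch.
    cnn_t is the network with the current parameters theta_t, as a callable."""
    e1 = [x.copy() for x in X]       # (12) no perturbation
    e2 = [A @ (H @ x) for x in X]    # (13) classical linear reconstruction AHx
    e3 = [cnn_t(v) for v in e2]      # (14) dynamic perturbation: CNN output
    return e1, e2, e3

def projector_loss(cnn, ensembles, X):
    """Sum over ensembles and images of ||x_q - CNN(x_tilde_{q,n})||^2."""
    return sum(float(np.sum((x - cnn(v)) ** 2))
               for ens in ensembles for x, v in zip(X, ens))

rng = np.random.default_rng(4)
H = rng.standard_normal((6, 10))
A = H.T                              # backprojection as the linear recon A
X = [rng.standard_normal(10) for _ in range(3)]
cnn_stub = lambda v: v               # identity placeholder for the CNN
e1, e2, e3 = make_ensembles(X, H, A, cnn_stub)
# With an identity "CNN", ensemble (12) incurs zero loss, reflecting the
# autoencoding property discussed above.
assert projector_loss(cnn_stub, (e1,), X) == 0.0
```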

A. Architecture
The architecture we use is the same as in [19]: a U-net [37] with intrinsic skip connections among its layers and an extrinsic skip connection between the input and the output. The intrinsic skip connections help to eliminate singularities during the training [38]. The extrinsic skip connection makes this network a residual net; i.e., CNN = Id + Unet, where Id denotes the identity operator and Unet : R^N → R^N denotes the U-net as a function. The U-net therefore actually provides the projection error (negative perturbation) that should be added to the input to get the projection. Residual nets have been shown to be effective in image recognition [39] and for inverse problems [19]. While the residual-net architecture does not increase the capacity or the approximation power of the CNN, it does help in learning functions that are close to an identity operator, as is the case in our setting.

B. Sequential Training Strategy
We train the CNN in 3 stages. In stage 1, we train it for T_1 epochs w.r.t. the loss function J_2, which only uses the ensemble {x̃_{q,2}} generated by (13). In stage 2, we add the ensemble {x̃_{q,3}} according to (14) at every epoch and then train the CNN w.r.t. the loss function J_2 + J_3; we repeat this procedure for T_2 epochs. Finally, in stage 3, we train the CNN for T_3 epochs with all the ensembles {x̃_{q,1}, x̃_{q,2}, x̃_{q,3}} to minimize the original loss function J = J_1 + J_2 + J_3 from (11).
The above sequential procedure helps speed up the training. The training with {x̃_{q,1}} is initially bypassed by using the residual net, which is close to the identity operator; this ensemble is only incorporated in the last few epochs of stage 3. After training with only {x̃_{q,2}} in stage 1, x̃_{q,3} will be close to x_q, since it is the output of the CNN for the input x̃_{q,2}. This eases the training with {x̃_{q,3}}, which is added after stage 1.

V. EXPERIMENTS
We validate our proposed method on the difficult case of sparse-view CT reconstruction with low-dosage exposure. The measurement operator H is now the Radon transform, which maps an image to the values of its integrals along a known set of lines [40]. In 2D, these measurements can be indexed by the angles and offsets of the lines and arranged in a 2D sinogram. We are particularly interested in the case where the total number of measurements is smaller than the number of pixels in the reconstruction. For example, we aim to reconstruct a (512 × 512) image from a sinogram with 45 angles and 729 offsets per angle; i.e., to reconstruct x ∈ R^{512×512} from y ∈ R^{45×729}. This corresponds to about 8 times fewer measurements than pixels in the image to be reconstructed.
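The measurement bookkeeping in this setup works out as plain arithmetic from the numbers above (the 720-view full sinogram is the convention used later in the experiments):

```python
# Unknowns vs. measurements for the sparse-view scenarios above.
n_pixels = 512 * 512          # 262144 unknowns in the image
meas_45 = 45 * 729            # 32805 measurements (45 views, 729 offsets)
meas_144 = 144 * 729          # 104976 measurements (144 views)

print(round(n_pixels / meas_45, 2))   # 7.99: about 8x fewer measurements
print(720 // 144, 720 // 45)          # 5 16: view-reduction factors
```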

A. Dataset
Our dataset consists of clinically realistic, in vivo (512 × 512) CT scans of human abdomens from the Mayo Clinic for the AAPM Low Dose CT Grand Challenge [41]. The data include full-dose CT scans of 10 patients. We use 475 images from 9 patients for training and 25 images from the remaining patient for testing. This is the same data as used in [19]. These images serve as the ground truth.
From these images, we generate the measurements (sinograms) using the radon command in Matlab, which corresponds to the forward model H. The sinograms always have 729 offsets per view, but we vary the number of views in different experiments. Our task is to reconstruct the images from their sparse-view sinograms. We consider 2 scenarios: 144 views and 45 views, which correspond to ×5 and ×16 dosage reductions (assuming a full-view sinogram has 720 views).
The backprojection H T is implemented via the iradon command with a normalization to satisfy the adjoint property.
To make the experiments more realistic and to reduce the inverse crime, the sinograms are generated by slightly perturbing the angles of the views with zero-mean additive white Gaussian noise (AWGN) of standard deviation 0.05 degrees. This creates a slight mismatch between the actual measurement process and the forward model H. We also add various amounts of zero-mean Gaussian noise n to the sinograms. The SNR of the sinogram y + n is defined as

SNR(y + n, y) = 20 log10( ||y||_2 / ||n||_2 ).

Given the ground truth x, our figure of merit for the reconstructed x* is the regressed SNR, given by

SNR(x*, x) = max_{a,b} SNR(a x* + b, x),

where the scalars a and b serve to scale the data and remove any DC offset, which can greatly affect the SNR but are of little practical importance.
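A possible implementation of the regressed SNR; solving for a and b by least squares is our reading of the maximization above, and the noisy toy reconstruction is an illustrative assumption.

```python
import numpy as np

def regressed_snr(x_rec, x_true):
    """SNR after fitting a scale a and offset b to the reconstruction,
    i.e., max over (a, b) of 20*log10(||x|| / ||x - (a*x_rec + b)||)."""
    r, t = x_rec.ravel(), x_true.ravel()
    M = np.stack([r, np.ones_like(r)], axis=1)
    (a, b), *_ = np.linalg.lstsq(M, t, rcond=None)   # least-squares fit
    err = t - (a * r + b)
    return 20.0 * np.log10(np.linalg.norm(t) / np.linalg.norm(err))

rng = np.random.default_rng(5)
x = rng.standard_normal(1000)
# A reconstruction that is accurate up to a scale and a DC offset:
x_rec = 2.0 * x + 3.0 + 0.01 * rng.standard_normal(1000)
snr_fit = regressed_snr(x_rec, x)   # high, despite the wrong scale/offset
```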

B. Comparison Methods
We compare four reconstruction methods and report the SNRs for all of them.
1) FBP is the classical direct inversion of the Radon transform H, here implemented in Matlab by the iradon command with the Ram-Lak filter and linear interpolation as options.
2) TV solves

x* = arg min_x ( (1/2)||Hx − y||_2^2 + λ||x||_TV ).

The optimization is carried out via ADMM [13]. For a given testing image, the parameter λ is tuned so as to maximize the SNR of the reconstruction.
3) FBPconv is the deep-learning-based regression technique [19] that corresponds to a CNN trained with only the ensemble in (13). In the testing phase, the FBP of the measurements is fed into the trained CNN to output the reconstructed image.
4) RPGD is our proposed method, described in Algorithm 1, where the nonlinear operator F is the CNN trained as a projector (as discussed in Section IV).

C. Training and Selection of Parameters
We now describe how training and/or parameter selection occurred for the reconstruction methods. FBP has no free hyperparameters. For TV, we chose λ by a grid search through 20 values for each test image. While carrying out the optimization with ADMM, we set the penalty parameter ρ = λ. The rationale for this heuristic is that the soft-threshold parameter is then of the same order of magnitude as the image gradients. We set the number of iterations to 100, which was enough to reach good empirical convergence.
As discussed in Section IV, the CNN for RPGD is trained in 3 stages. We used the CNN obtained right after the first stage for FBPconv, since during this stage only the training ensemble in (13) was taken into account. We empirically found that the training error J_2 converged within the T_1 epochs of stage 1, yielding optimal performance for FBPconv.
In addition, we also trained the CNNs w.r.t. a 40-dB noise level in the measurements, by replacing the ensemble in (13) with {Ay_q}, where y_q = Hx_q + n. With 20% probability, we also perturbed the views of the measurements with AWGN of standard deviation 0.05 degrees, so as to enforce robustness to model mismatch. These CNNs were initialized with the ones obtained after the first stage of the noiseless training and were then trained in the same staged manner. Similarly to the noiseless case, the CNNs obtained after the first and the third stages of this training were used in FBPconv and RPGD, respectively. For clarity, these variants will be referred to as FBPconv40 and RPGD40.
During the training, the learning rate was decreased logarithmically from 10^{-2} to 10^{-3} in stage 1 and kept at 10^{-3} for stages 2 and 3. The batch size was fixed to 2, gradients above 10^{-2} were clipped, and the momentum was set to 0.99. The total training time for the noiseless case was around 21.5 hours on a Titan X GPU (Pascal architecture).
The hyperparameters for RPGD were chosen as follows: the relaxation parameter α_0 was initialized with 1, and the sequence {c_k} was set to a constant C, with C = 0.99 for RPGD and C = 0.8 for RPGD40. For each noise and dosage-reduction level, the only free parameter, γ, was tuned to give the best average SNR over the 25 test images. In all experiments, the gradient step was removed from the first iteration. On the GPU, one iteration of RPGD takes less than 1 second. The algorithm is stopped when the residual ||x_{k+1} − x_k||_2 falls below 1, which is sufficiently small compared to the dynamic range [0, 350] of the image. It takes around 1-2 minutes to reconstruct an image with RPGD.

VI. RESULTS
Tables I and II report the results of the various methods for low and high measurement noise, respectively. FBPconv and RPGD are used for low noise, while FBPconv40 and RPGD40 are used for high noise. The reconstruction SNRs are averaged over the 25 test images.
In the low-noise cases (Table I), the proposed method, RPGD, outperforms all the others for both ×5 and ×16 reductions. FBP performs the worst but retains enough information to be exploited by FBPConv and RPGD. Thanks to the convexity of its iterative scheme, TV performs well but tends to smooth textures and edges. FBPConv, in turn, outperforms TV but is itself outperformed by RPGD. This is mainly due to the feedback mechanism in RPGD, which lets it use the information in the given measurements to increase the quality of the reconstruction. In fact, for the ×16, no-noise case, the SNRs of the sinograms of the images reconstructed by TV, FBPconv, and RPGD are around 47 dB, 57 dB, and 62 dB, respectively. This means that the RPGD reconstruction not only has better image quality but is also more reliable, since it is more consistent with the given noiseless measurements.
In the noisier cases (Table II), RPGD40 outperforms the other methods in the low-view (×16) cases and is more consistent in performance than the others in the high-view (×5) cases. FBPconv40 substantially outperforms TV in the two scenarios with 40-dB measurement noise, on which it was actually trained. However, as the level of noise deviates from 40 dB, the performance of FBPconv40 degrades significantly. Surprisingly, its performance in the 45-dB cases is much worse than in the corresponding 40-dB cases. This implies that FBPConv is highly sensitive to the difference between the training and the testing conditions. By contrast, RPGD40 is more robust to this difference due to its iterative correction. In fact, for the ×16 case with 45-dB and 35-dB noise levels, it outperforms FBPconv40 by around 3.5 dB and 6 dB, respectively.
Fig. 2(a) illustrates the reconstructions of a test image for the ×16 case when the measurement is noiseless. FBP is dominated by line artifacts, while TV satisfactorily removes these but blurs the fine structures. FBPConv and RPGD, on the other hand, are able to reconstruct these details. The zoomed version shows that RPGD reconstructs the fine details better than the others. This observation remains the same when the measurement quality degrades. Fig. 2(b) shows the reconstructions for the 45-dB and 40-dB noise levels. In these scenarios, RPGD40 is significantly better than both FBPconv40 and TV.

Due to the high value of the step size (γ = 2 × 10^{-3}) and the large difference Hx_k − y, the initial few iterations have large gradients, resulting in instability of the algorithm. The reason is that the CNN is fed with x_k − γH^T(Hx_k − y), which is drastically different from the perturbations on which it was trained. In this situation, α_k decreases steeply and stabilizes the algorithm. At convergence, α_k = 0; therefore, according to Theorem 3, x_100 is a fixed point of (9) with F = CNN.

VII. CONCLUSION
We have proposed a simple yet effective iterative scheme (RPGD) in which one step of enforcing measurement consistency is followed by a CNN that tries to project the solution onto the set of the data that we are interested in. The whole scheme is guaranteed to be convergent. We also introduced a novel method to train a CNN that acts like a projector using a reasonably small dataset (475 images). For sparse-view CT reconstruction, our method outperforms the previous techniques for both noiseless and noisy measurements.
The proposed framework can be used to solve other inverse problems such as super-resolution, deconvolution, and accelerated MRI. This can bring more robustness and reliability to current deep-learning-based techniques.

APPENDIX
On the other hand, from the construction of {α_k},

‖x_{k+1} − x_k‖_2 ≤ c_k ‖x_k − x_{k−1}‖_2.  (19)

Iterating (19) gives

‖x_{k+1} − x_k‖_2 ≤ (∏_{i=1}^{k} c_i) ‖x_1 − x_0‖_2.  (20)

We now show that {x_k} is a Cauchy sequence. Since {c_k} is asymptotically upper-bounded by C < 1, there exists K such that c_k ≤ C, ∀k > K. Let m, n be two integers such that m > n > K. By using (20) and the triangle inequality,

‖x_m − x_n‖_2 ≤ Σ_{k=n}^{m−1} ‖x_{k+1} − x_k‖_2 ≤ (Σ_{k=n}^{m−1} C^{k−K}) (∏_{i=1}^{K} c_i) ‖x_1 − x_0‖_2 ≤ (C^{n−K} / (1 − C)) (∏_{i=1}^{K} c_i) ‖x_1 − x_0‖_2.  (21)

The last inequality proves that ‖x_m − x_n‖_2 → 0 as m, n → ∞, i.e., {x_k} is a Cauchy sequence in the complete metric space R^N. As a consequence, {x_k} must converge to some point x* ∈ R^N.
(ii) Assume from now on that {α_k} is lower-bounded by ε > 0. By definition, {α_k} is also non-increasing and, thus, convergent to some α* > 0. Next, we rewrite the update of x_k in Algorithm 1 as

x_{k+1} = (1 − α_k) x_k + α_k G_γ(x_k),  (22)

where G_γ is defined by (9). Taking the limit of both sides of (22) leads to

x* = (1 − α*) x* + α* lim_{k→∞} G_γ(x_k).  (23)

Moreover, since the nonlinear operator F is continuous, G_γ is also continuous. Hence,

lim_{k→∞} G_γ(x_k) = G_γ(lim_{k→∞} x_k) = G_γ(x*).  (24)

By plugging (24) into (23), we get that x* = G_γ(x*), which means that x* is a fixed point of the operator G_γ.
(iii) Suppose now that F = P_S satisfies (6). We invoke Proposition 1 to infer that x* is also a local minimizer of (3), thus completing the proof.

A. Proof of Proposition 1
Suppose that (6) is fulfilled and let x* ∈ S be a fixed point of G_γ. We show that x* is also a local minimizer of (3). Indeed, setting x = x* − γH^T Hx* + γH^T y leads to P_S x = x*. Then, by (6), there exists ε > 0 such that, for all z ∈ S ∩ B_ε(x*),

⟨z − x*, x − x*⟩ = ⟨z − x*, −γH^T(Hx* − y)⟩ ≤ 0.

Since γ > 0, the last inequality implies that ⟨Hz − Hx*, Hx* − y⟩ ≥ 0 and, therefore,

‖Hz − y‖₂² = ‖Hz − Hx*‖₂² + 2⟨Hz − Hx*, Hx* − y⟩ + ‖Hx* − y‖₂² ≥ ‖Hx* − y‖₂², ∀z ∈ S ∩ B_ε(x*),

which means that x* is a local minimizer of (3).
Assume now that P_S satisfies (7). By just removing the ε-ball in the above argument, one easily verifies that ‖Hx* − y‖_2 ≤ ‖Hz − y‖_2 for all z ∈ S, which means that x* is a global minimizer of (3).

B. Proof of Proposition 2
We prove by contradiction. Assume that S is non-convex; then there must exist x_1, x_2 ∈ S and α ∈ (0, 1) such that x = αx_1 + (1 − α)x_2 ∉ S. Since P_S x ∈ S, it must be that

0 < ‖x − P_S x‖₂² = ⟨x − P_S x, x − P_S x⟩ = α⟨x_1 − P_S x, x − P_S x⟩ + (1 − α)⟨x_2 − P_S x, x − P_S x⟩.

Thus, there exists i ∈ {1, 2} such that ⟨x_i − P_S x, x − P_S x⟩ > 0, which violates (7). So, S must be convex.
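This contradiction argument can be checked on a toy example. The non-convex set S = {−1, +1} ⊂ R and the test point x = 0.2 below are our own illustrative choices, not taken from the paper:

```python
# Condition (7) requires <z - P_S(x), x - P_S(x)> <= 0 for every z in S.
# For a non-convex S, some z must violate it, as Proposition 2 predicts.
S = [-1.0, 1.0]                             # a non-convex (two-point) set
x = 0.2
p = min(S, key=lambda s: abs(s - x))        # orthogonal projection of x onto S
inner = [(z - p) * (x - p) for z in S]      # 1-D inner products
assert max(inner) > 0                       # z = -1 gives +1.6, so (7) fails
```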

C. Proof of Proposition 3
Suppose that S = ⋃_{i=1}^{n} C_i, where C_i is a closed convex set for all i = 1, …, n. The statement is trivial when n = 1; assume now that n ≥ 2. Let x ∈ R^N and x̄ be the orthogonal projection of x onto S. We consider two cases.
Case 1: x̄ ∈ ⋂_{i=1}^{n} C_i. Since x̄ minimizes the distance from x to S ⊇ C_i, this means that x̄ is the orthogonal projection of x onto each C_i. Consequently, ⟨z − x̄, x − x̄⟩ ≤ 0, ∀z ∈ C_i, ∀i ≤ n, which implies that (6) holds true for all ε > 0.
Case 2: x̄ ∉ ⋂_{i=1}^{n} C_i. Without loss of generality, there exists m < n such that

x̄ ∈ ⋂_{i=1}^{m} C_i and x̄ ∉ C_i, ∀i > m.  (25)

Let d be the distance from x̄ to the set T = ⋃_{i=m+1}^{n} C_i. Since each C_i is closed, T must be closed too and, so, d > 0. We now choose 0 < ε < d. Then, B_ε(x̄) ∩ T = ∅. Therefore,

S ∩ B_ε(x̄) = ⋃_{i=1}^{m} C̃_i,  (26)

where C̃_i = C_i ∩ B_ε(x̄) is clearly a convex set, for all i ≤ m. It is straightforward that x̄ is the orthogonal projection of x onto the set

S̃ = ⋃_{i=1}^{m} C̃_i  (27)

and that x̄ ∈ ⋂_{i=1}^{m} C̃_i. We are back to Case 1 and, therefore, ⟨z − x̄, x − x̄⟩ ≤ 0, ∀z ∈ C̃_i, ∀i ≤ m. From (26) and (27), (6) is fulfilled for the chosen ε.

D. Proof of Theorem 1
Let {λ_i} denote the set of eigenvalues of H^T H. We first have that, for all x ∈ R^N,

‖x − γH^T Hx‖_2 ≤ ‖I − γH^T H‖_2 ‖x‖_2,  (28)

where the spectral norm of I − γH^T H is given by

‖I − γH^T H‖_2 = max_i |1 − γλ_i|.  (29)

On the other hand, choosing γ = 2/(λ_max + λ_min) yields

max_i |1 − γλ_i| = (λ_max − λ_min)/(λ_max + λ_min).  (30)

By combining (28), (29), and (30),

‖x − γH^T Hx‖_2 ≤ ((λ_max − λ_min)/(λ_max + λ_min)) ‖x‖_2, ∀x ∈ R^N.
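The chain (28)–(30) can be verified numerically on a small random forward model; the matrix size and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 10))            # random forward model H
lam = np.linalg.eigvalsh(H.T @ H)            # eigenvalues of H^T H, ascending
lam_min, lam_max = lam[0], lam[-1]
gamma = 2.0 / (lam_max + lam_min)            # step size chosen in Theorem 1

# spectral norm of I - gamma H^T H equals (lam_max - lam_min)/(lam_max + lam_min)
rho = np.linalg.norm(np.eye(10) - gamma * (H.T @ H), 2)
bound = (lam_max - lam_min) / (lam_max + lam_min)
assert abs(rho - bound) < 1e-9

# hence ||x - gamma H^T H x|| <= bound * ||x|| for any x
x = rng.standard_normal(10)
assert np.linalg.norm(x - gamma * H.T @ (H @ x)) <= bound * np.linalg.norm(x) + 1e-9
```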

E. Proof of Theorem 2
Again, let {λ_i} be the set of eigenvalues of H^T H. With γ < 2/λ_max, one readily verifies that, ∀x ∈ R^N,

‖x − γH^T Hx‖_2 ≤ max_i |1 − γλ_i| ‖x‖_2 ≤ ‖x‖_2.

Combining this with the non-expansiveness of P_S leads to

‖G_γ(x) − G_γ(z)‖_2 ≤ ‖x − z‖_2, ∀x, z ∈ R^N.

Now that G_γ is a non-expansive operator with a nonempty fixed-point set, we invoke the Krasnosel'skiĭ–Mann theorem [42, Thm. 5.14] to deduce that the iteration (8) must converge to a fixed point of G_γ, which is, by Proposition 1, also a local minimizer of (3).
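Since the CNN projector is not available here, the sketch below illustrates Theorem 2 with a simple convex choice of S, namely the nonnegative orthant; the matrix sizes and seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
gamma = 1.0 / np.linalg.eigvalsh(H.T @ H)[-1]      # gamma < 2 / lam_max

proj = lambda v: np.maximum(v, 0.0)                # P_S, S = nonnegative orthant
G = lambda v: proj(v - gamma * H.T @ (H @ v - y))  # the operator G_gamma

# G_gamma is non-expansive: ||G(a) - G(b)|| <= ||a - b||
for _ in range(5):
    a, b = rng.standard_normal(10), rng.standard_normal(10)
    assert np.linalg.norm(G(a) - G(b)) <= np.linalg.norm(a - b) + 1e-12

# the iteration (8) converges to a fixed point of G_gamma
x = np.zeros(10)
for _ in range(5000):
    x = G(x)
assert np.linalg.norm(x - G(x)) < 1e-8
```

The fixed point reached here is the nonnegative least-squares solution, i.e., a minimizer of the constrained data-fidelity problem for this convex S.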

Fig. 1. (a) Block diagram of projected gradient descent using a CNN as the projector. The gradient step w.r.t. the data-fidelity term E = ‖Hx − y‖₂² promotes consistency with the measurements, and the projector forces the solution to belong to the set of desired solutions. If the CNN is only an approximate projector, the scheme may diverge. (b) Block diagram of the proposed relaxed projected gradient descent. The α_k's are updated in such a way that the algorithm always converges (see Algorithm 1 for more details).

Fig. 2. (a) Reconstructions of a test image from noiseless measurements with 45 views (×16 dosage reduction) using FBP, TV, FBPconv, and RPGD (proposed): the first row shows the full images; the second row shows their zoomed versions. (b) Zoomed reconstructions of the same image when the measurement is noisy with 45-dB (first row) and 40-dB (second row) SNR. In these cases, FBPconv and RPGD are replaced by FBPconv40 and RPGD40, respectively.

Fig. 3. Reconstructions of a test image from noisy measurements with 144 views (×5 dosage reduction) using FBP, TV, FBPconv40, and RPGD40 (proposed). The first row shows the results for measurement noise with SNR = 45 dB; the second row shows their zoomed versions. The third row shows the zoomed results for measurement noise with SNR = 35 dB.

Fig. 4. Convergence with iteration k of RPGD for the ×16, no-noise case when C = 0.99. Results are averaged over 25 test images. (a) SNRs of x_k w.r.t. the ground-truth image. (b) SNRs of Hx_k w.r.t. the ground-truth sinogram. (c) Evolution of the relaxation parameters α_k.

Fig. 3 compares the reconstructions for the ×5 case when the noise levels are 45 dB and 35 dB. It is visible that FBPconv40 results in a noisy image, while TV is again blurred.


TABLE I. AVERAGED RECONSTRUCTION SNRS OVER 25 IMAGES FOR 16 AND 5 TIMES DOSAGE REDUCTIONS WITH LOW MEASUREMENT NOISE.

TABLE II. AVERAGED RECONSTRUCTION SNRS OVER 25 IMAGES FOR 16 AND 5 TIMES DOSAGE REDUCTIONS WITH HIGH MEASUREMENT NOISE.