
On the Design of Weakly-Convex Regularizers for Solving Linear Inverse Problems



Abstract:

Linear inverse problems are ubiquitous in signal processing and computational imaging. The prototypical problem is to recover a signal from noisy linear measurements. A typical optimization-based approach is to minimize the sum of a data-fidelity loss and a regularization function. The data-fidelity function ensures consistency with the measurements, and the regularization function imparts desired properties to the solution. Convex regularization functions are typically preferred because they admit theoretical guarantees. However, the convex-nonconvex (CNC) framework, which employs a convex data-fidelity loss and a nonconvex regularization function, has been shown to be superior in terms of the quality of signal recovery. In this paper, we consider model-based and data-driven nonconvex regularization objectives for solving linear inverse problems. We consider the denoising problem and propose a constructive approach to designing weakly convex regularization functions by minimizing a measure of maximum concavity. Our design approach captures known model-based regularization functions, including those that promote sparsity, and also accommodates learnable convolutional neural networks. The objective is minimized using first-order gradient-based methods. Our approach ensures that the overall reconstruction technique is provably convergent. We show that it outperforms state-of-the-art model-based techniques and is comparable to benchmark learning-based methods. Crucially, our technique produces reconstructions with fewer artifacts than the state-of-the-art learning-based methods. Our reconstruction approach also reduces the number of network parameters to be learned for comparable neural network architectures, making the models easier and faster to train.
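As a minimal sketch of the CNC setup the abstract describes, the example below minimizes a convex data-fidelity term plus a weakly convex sparsity regularizer by plain gradient descent. It uses the minimax concave penalty (MCP), a standard weakly convex regularizer, purely as an illustration; it is not the regularizer designed in the paper, and the function names and parameters (`lam`, `gamma`) are assumptions for this sketch.

```python
import numpy as np

def mcp_grad(x, lam, gamma):
    """Gradient of the minimax concave penalty (MCP), a classical
    weakly convex sparsity regularizer (illustrative; not the
    paper's learned regularizer). Its concavity is bounded by
    1/gamma, which is what makes the penalty *weakly* convex."""
    return np.where(np.abs(x) <= lam * gamma,
                    lam * np.sign(x) - x / gamma,
                    0.0)

def reconstruct(A, y, lam=0.1, gamma=5.0, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + R_MCP(x) by first-order
    gradient descent. The overall objective stays convex when the
    smallest eigenvalue of A^T A exceeds 1/gamma (the CNC condition),
    so gradient descent with a small enough step converges."""
    # Step size from a Lipschitz bound on the full gradient.
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1.0 / gamma)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + mcp_grad(x, lam, gamma)
        x -= step * grad
    return x
```

Unlike the convex l1 penalty, the MCP gradient vanishes for large entries, so strong signal components are not shrunk toward zero; this is the recovery-quality advantage of nonconvex regularization that the abstract refers to.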
Date of Conference: 06-11 April 2025
Date Added to IEEE Xplore: 07 March 2025
Conference Location: Hyderabad, India
