
A hierarchical sparsity-smoothness Bayesian model for ℓ0 + ℓ1 + ℓ2 regularization



Abstract:

Sparse signal/image recovery is a challenging topic that has attracted great interest over the last decades. To address the ill-posedness of the related inverse problem, regularization is often essential, using appropriate priors that promote the sparsity of the target signal/image. In this context, ℓ0 + ℓ1 regularization has been widely investigated. In this paper, we introduce a new prior that simultaneously accounts for both the sparsity and the smoothness of restored signals. We use a Bernoulli-generalized Gauss-Laplace distribution to perform ℓ0 + ℓ1 + ℓ2 regularization in a Bayesian framework. Our results show the potential of the proposed approach, especially in restoring the non-zero coefficients of the signal/image of interest.
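As a rough illustration of the kind of prior the abstract describes, the following sketch evaluates a Bernoulli mixture whose continuous component has a log-density combining an ℓ1 (Laplace) and an ℓ2 (Gaussian) term. This is a plausible reading, not the paper's exact parameterization: the names `bggl_pdf`, `omega`, `lam`, and `gam`, and the specific normalization, are assumptions for illustration only.

```python
import math

def ggl_normalizer(lam, gam):
    # Z = integral of exp(-lam*|x| - gam*x**2) over the real line,
    # in closed form via the complementary error function.
    return (math.sqrt(math.pi / gam)
            * math.exp(lam ** 2 / (4.0 * gam))
            * math.erfc(lam / (2.0 * math.sqrt(gam))))

def bggl_pdf(x, omega, lam, gam):
    """Continuous part of a (hypothetical) Bernoulli-generalized
    Gauss-Laplace prior.

    With probability (1 - omega) the coefficient is exactly zero (the
    Bernoulli atom, the l0 part); with probability omega it follows a
    density proportional to exp(-lam*|x| - gam*x**2), whose log mixes
    an l1 (Laplace) and an l2 (Gaussian) penalty.
    """
    return omega * math.exp(-lam * abs(x) - gam * x * x) / ggl_normalizer(lam, gam)
```

The atom at zero enforces exact sparsity, while the Laplace and Gaussian terms in the continuous component shrink and smooth the non-zero coefficients, which is how an ℓ0 + ℓ1 + ℓ2 penalty arises from the negative log-prior.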
Date of Conference: 04-09 May 2014
Date Added to IEEE Xplore: 14 July 2014
Electronic ISBN: 978-1-4799-2893-4

Conference Location: Florence, Italy

1. INTRODUCTION

Sparse signal and image restoration is an open issue that has been the focus of numerous works over the last decades. More recently, with the emergence of compressed sensing theory [1], sparse models have gained further interest. Indeed, recent applications generally produce large data sets that have the particularity of being highly sparse in a transformed domain. Since these data are generally modeled using ill-posed observation systems, regularization is usually required to improve the quality of the reconstructed signals/images through the use of appropriate prior information. A natural way to promote sparsity is to penalize or constrain the ℓ0 pseudo-norm of the reconstructed signal. Unfortunately, optimizing the resulting criterion is a combinatorial problem. Suboptimal greedy algorithms, such as matching pursuit [2] or its orthogonal counterpart [3], may provide reasonable solutions to this NP-hard problem. However, despite recent advances which made the ℓ0-penalized problem tractable in a variational framework [4], fixing the regularization hyperparameters is still an open issue. Conversely, the solutions of the ℓ0-penalized problem can coincide with those of an ℓ1-penalized problem [5], provided that appropriate sufficient conditions are fulfilled. Based on this convex relaxation of the problem, a number of works have proposed efficient algorithms to solve ℓ1-penalized problems (see for instance [6], [7]). Again, choosing appropriate values for the hyperparameters associated with the ℓ1-penalized (or ℓ1-constrained) problem remains a difficult task [8]. These hyperparameters can for instance be estimated using empirical assessments, cross-validation, or external empirical Bayes approaches such as [9], [10]. In this context, fully Bayesian approaches have demonstrated their flexibility in overcoming these issues. More specifically, Bernoulli-based models [11]–[14] have proven to be efficient tools for building sparsity-promoting priors.
Moreover, these Bayesian approaches allow the target signal and the regularization hyperparameters to be estimated jointly and directly from the data, avoiding a difficult and tedious manual tuning of these hyperparameters.
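To make the ℓ1-penalized problem discussed above concrete, here is a minimal iterative shrinkage-thresholding (ISTA) sketch for min over x of ½‖y − Hx‖² + λ‖x‖₁. This is standard textbook material, not one of the algorithms cited in [6], [7], and the function names are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(H, y, lam, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-penalized least-squares
    problem min_x 0.5 * ||y - H x||^2 + lam * ||x||_1."""
    x = np.zeros(H.shape[1])
    # Step size 1/L, where L = ||H||_2^2 is the Lipschitz constant
    # of the gradient of the quadratic data-fidelity term.
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    for _ in range(n_iter):
        # Gradient step on the quadratic term, then the l1 proximal step.
        x = soft_threshold(x + step * H.T @ (y - H @ x), step * lam)
    return x
```

Note that λ must still be chosen by the user, which is precisely the hyperparameter-tuning difficulty that motivates the fully Bayesian treatment proposed in this paper.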
