Abstract:
We consider the problem of sparse estimation in a Bayesian framework. We outline the derivation of the Lasso in terms of marginalization of a particular Bayesian model. A different marginalization of the same model leads to a nonconvex estimator in which the hyperparameters are optimized. The arguments are extended to problems where groups of variables have to be estimated. An alternative to the Group Lasso is derived, and its connection with Multiple Kernel Learning is established. Our estimator is nonconvex, but one of its versions requires optimization with respect to only a single scalar variable. Theoretical arguments and numerical experiments show that the new technique yields sparse solutions that are more accurate than those of the two convex estimators.
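For context, a standard route to this kind of derivation (a well-known identity, sketched here under the assumption that the paper's model follows it; the specific model is not reproduced on this page) represents the Laplace prior as a Gaussian scale mixture, so that marginalizing the latent variances and taking the MAP estimate recovers the Lasso:

\[
\frac{\lambda}{2}\, e^{-\lambda|\theta_i|} \;=\; \int_0^{\infty} \mathcal{N}(\theta_i;\,0,\,\tau_i)\,\frac{\lambda^2}{2}\, e^{-\lambda^2 \tau_i/2}\, d\tau_i .
\]

For a linear Gaussian model \(y = X\theta + e\), \(e \sim \mathcal{N}(0,\sigma^2 I)\), the MAP estimate under this marginal prior is

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \frac{1}{2\sigma^2}\,\|y - X\theta\|_2^2 \;+\; \lambda \sum_i |\theta_i| ,
\]

i.e., the Lasso. Optimizing the hyperparameters \(\tau_i\) instead of integrating them out gives a nonconvex criterion, which is the kind of distinction the abstract refers to.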
Date of Conference: 12-15 December 2011
Date Added to IEEE Xplore: 01 March 2012