
The LASSO Risk for Gaussian Matrices


2 Author(s)
M. Bayati (Graduate School of Business, Stanford University, Stanford, CA, USA); A. Montanari

We consider the problem of learning a coefficient vector x₀ ∈ ℝ^N from noisy linear observations y = Ax₀ + w ∈ ℝ^n. In many contexts (ranging from model selection to image processing), it is desirable to construct a sparse estimator x̂. In this case, a popular approach consists in solving an ℓ1-penalized least-squares problem known as the LASSO or basis pursuit denoising. For sequences of matrices A of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of approximate message passing (AMP), a recently developed efficient algorithm inspired by graphical model ideas. Simulations on real data matrices suggest that our results can be relevant in a broad array of practical applications.
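The abstract does not spell out the AMP iteration; the following is a minimal NumPy sketch of a standard AMP scheme for the LASSO (soft-thresholding plus an Onsager correction term), not the authors' code. The fixed threshold `theta`, the problem sizes, and the signal model below are illustrative assumptions, and the matrix A has i.i.d. Gaussian entries as in the setting the abstract describes.

```python
import numpy as np

def soft_threshold(v, theta):
    # Componentwise soft-thresholding: eta(v; theta) = sign(v) * max(|v| - theta, 0).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp_lasso(y, A, theta=1.0, n_iter=30):
    """Illustrative AMP iteration for the LASSO:
        x_{t+1} = eta(x_t + A^T z_t; theta)
        z_{t+1} = y - A x_{t+1} + (z_t / delta) * <eta'>
    where delta = n/N and <eta'> is the fraction of active coordinates."""
    n, N = A.shape
    delta = n / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        v = x + A.T @ z               # pseudo-data: estimate + matched filter of residual
        x = soft_threshold(v, theta)  # sparsifying nonlinearity
        onsager = (z / delta) * np.mean(x != 0.0)  # Onsager reaction term
        z = y - A @ x + onsager       # corrected residual
    return x

# Synthetic instance in the regime the paper studies: A with i.i.d. Gaussian
# entries of variance 1/n, a sparse coefficient vector, small additive noise.
rng = np.random.default_rng(0)
n, N, k = 250, 500, 25
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = 3.0
y = A @ x0 + 0.1 * rng.standard_normal(n)
x_hat = amp_lasso(y, A, theta=1.0, n_iter=30)
```

The Onsager term is what distinguishes AMP from plain iterative soft-thresholding: it decorrelates the residual from the current estimate, which is what makes the asymptotic risk analysis tractable.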

Published in:

IEEE Transactions on Information Theory (Volume: 58, Issue: 4)