
Support Recovery With Sparsely Sampled Free Random Matrices

Authors: Tulino, A. M. (Bell Labs, Alcatel-Lucent, Holmdel, NJ, USA); Caire, G.; Verdú, S.; Shamai, S.

Consider a Bernoulli-Gaussian complex n-vector whose components are V_i = X_i B_i, with X_i ~ CN(0, P_X) and binary B_i mutually independent and iid across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where U is typically a matrix with iid components, to allow U satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit of n → ∞. We also extend the scope of the large deviation approach of Rangan and characterize the performance of a class of estimators encompassing thresholded linear MMSE and ℓ_1 relaxation.
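As a concrete illustration of the observation model in the abstract, the sketch below simulates the Bernoulli-Gaussian source, a Haar-distributed unitary matrix U, Bernoulli sampling of the components of UV at rate p, additive Gaussian noise, and a thresholded linear-MMSE support estimate (one of the estimator classes the paper analyzes). This is a minimal Python sketch, not code from the paper; all parameter values, the threshold choice, and the variable names are illustrative assumptions, and the asymptotic replica analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model parameters (not taken from the paper)
n      = 512    # ambient dimension
q      = 0.1    # sparsity: P(B_i = 1)
p      = 0.5    # sampling ratio: average fraction of observed components
Px     = 1.0    # per-component signal power
sigma2 = 0.01   # noise variance

# Bernoulli-Gaussian sparse vector V_i = X_i * B_i
B = rng.random(n) < q
X = np.sqrt(Px / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
V = X * B

# Haar-distributed unitary matrix via QR of a complex Gaussian matrix,
# with the column phases fixed so the distribution is exactly Haar
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(G)
d = np.diag(R)
U = Q * (d / np.abs(d))

# Observe a random subset of the components of U V (average size n*p)
# in additive circularly symmetric Gaussian noise
S = rng.random(n) < p          # Bernoulli sampling mask
A = U[S, :]                    # retained rows of U
m = A.shape[0]
W = np.sqrt(sigma2 / 2) * (rng.standard_normal(m) + 1j * rng.standard_normal(m))
Y = A @ V + W

# Thresholded linear MMSE support estimate; the threshold 0.5*Cv is an
# arbitrary illustrative choice, not the paper's optimized threshold
Cv = q * Px                    # prior variance of each V_i
Vhat = Cv * A.conj().T @ np.linalg.solve(Cv * (A @ A.conj().T) + sigma2 * np.eye(m), Y)
support_hat = np.abs(Vhat) ** 2 > 0.5 * Cv

err_rate = np.mean(support_hat != B)
print(f"observed {m}/{n} components, support error rate = {err_rate:.3f}")
```

The empirical support error rate from such a simulation is what the paper characterizes analytically in the limit n → ∞ via the replica method and large deviation bounds.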

Published in:

IEEE Transactions on Information Theory (Volume 59, Issue 7)