
On the Performance of Sparse Recovery Via ℓp-Minimization (0 ≤ p ≤ 1)

Authors: Meng Wang, Weiyu Xu, Ao Tang — School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA

It is known that a high-dimensional sparse vector x* in ℝⁿ can be recovered from low-dimensional measurements y = Ax*, where A ∈ ℝ^{m×n} (m < n) is the measurement matrix. In this paper, with A being a random Gaussian matrix, we investigate the recovery ability of ℓp-minimization (0 ≤ p ≤ 1) as p varies, where ℓp-minimization returns a vector with the least ℓp quasi-norm among all the vectors x satisfying Ax = y. Besides analyzing the performance of strong recovery, where ℓp-minimization is required to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of "weak" recovery of ℓp-minimization (0 ≤ p < 1), where the aim is to recover all the sparse vectors on one support with a fixed sign pattern. When α (:= m/n) → 1, we provide sharp thresholds of the sparsity ratio (i.e., percentage of nonzero entries of a vector) that differentiate success and failure via ℓp-minimization. For strong recovery, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly, for weak recovery, the threshold is 2/3 for all p in [0, 1), while the threshold is 1 for ℓ1-minimization. We also explicitly demonstrate that ℓp-minimization (p < 1) can return a denser solution than ℓ1-minimization. For any α ∈ (0, 1), we provide bounds of the sparsity ratio for strong recovery and weak recovery, respectively, below which ℓp-minimization succeeds. Our bound for strong recovery improves on the existing bounds when α is large. In particular, regarding the recovery threshold, this paper argues that ℓp-minimization has a higher threshold with smaller p for strong recovery; the threshold is the same for all p for sectional recovery; and ℓ1-minimization can outperform ℓp-minimization for weak recovery. These results are in contrast to the traditional wisdom that ℓp-minimization, though computationally more expensive, always has better sparse recovery ability than ℓ1-minimization since it is closer to ℓ0-minimization.
Finally, we provide an intuitive explanation for our findings. Numerical examples are also used to unambiguously confirm and illustrate the theoretical predictions.
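To make the recovery setup concrete, below is a minimal sketch of the p = 1 case (basis pursuit), which is the only convex instance of ℓp-minimization and can be cast as a linear program. This is an illustration of the measurement model y = Ax* described in the abstract, not the authors' experimental code; the dimensions, sparsity level, and random seed are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 40, 8                      # ambient dim, measurements, sparsity

# Random Gaussian measurement matrix and a k-sparse ground truth x*.
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP over (x, t):
#   minimize sum(t)  subject to  x - t <= 0,  -x - t <= 0,  Ax = y,
# so at the optimum t_i = |x_i| and the objective equals ||x||_1.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]

# With sparsity well below the recovery threshold, ℓ1-minimization
# typically recovers x* exactly (up to solver tolerance).
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

For p < 1 the problem is non-convex and is usually attacked with iteratively reweighted schemes rather than a single LP, which is why the strong/weak threshold comparison across p in the paper is of practical interest.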

Published in: IEEE Transactions on Information Theory (Volume 57, Issue 11)