Dictionary Identification—Sparse Matrix-Factorization via ℓ1-Minimization

Authors: R. Gribonval (Project METISS, IRISA, Rennes, France) and Karin Schnass

This paper treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1-minimization. The problem can also be seen as factorizing a d × N matrix Y = (y1 . . . yN), yn ∈ ℝd of training signals into a d × K dictionary matrix Φ and a K × N coefficient matrix X = (x1 . . . xN), xn ∈ ℝK, which is sparse. The exact question studied here is when a dictionary coefficient pair (Φ, X) can be recovered as local minimum of a (nonconvex) ℓ1-criterion with input Y = Φ X. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialized to the case when the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows up to a logarithmic factor only linearly with the signal dimension, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.
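The local-identifiability claim can be illustrated numerically. The following is a minimal sketch (not the paper's algorithm), assuming NumPy: it draws an incoherent basis Φ (a random orthogonal matrix), Bernoulli-Gaussian sparse coefficients X with N on the order of K log K, forms Y = ΦX, and compares the ℓ1 criterion at the true basis against a small column-normalized perturbation of it. The parameter values (sparsity p, perturbation scale) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = K = 16                        # square dictionary, i.e., a basis
N = int(4 * K * np.log(K))        # N ~ C K log K training samples

# Bernoulli-Gaussian sparse coefficients: each entry is nonzero w.p. p
p = 0.2
X = rng.standard_normal((K, N)) * (rng.random((K, N)) < p)

# Incoherent basis: a random orthogonal matrix (unit-norm columns)
Phi, _ = np.linalg.qr(rng.standard_normal((d, K)))

Y = Phi @ X                       # training signals Y = Phi X

def l1_criterion(B, Y):
    """l1 cost of representing Y in the basis B: ||B^{-1} Y||_1."""
    return np.abs(np.linalg.solve(B, Y)).sum()

# Criterion at the true basis: coefficients are exactly X, so cost = ||X||_1
cost_true = l1_criterion(Phi, Y)

# Criterion at a small perturbation, renormalized to unit-norm columns
B = Phi + 0.05 * rng.standard_normal((d, K))
B /= np.linalg.norm(B, axis=0)
cost_pert = l1_criterion(B, Y)

print(cost_true, cost_pert)
```

Typically the perturbed basis incurs a strictly larger ℓ1 cost: the dense perturbation of the coefficients adds mass at the many zero entries of X, which is the intuition behind the true pair (Φ, X) being a local minimum of the criterion.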

Published in: IEEE Transactions on Information Theory (Volume 56, Issue 7)