This paper treats the problem of learning a dictionary that provides sparse representations for a given signal class via l1-minimization. The problem can equivalently be seen as factorizing a d × N matrix Y = (y1 ... yN), yn ∈ Rd, of training signals into a d × K dictionary matrix Φ and a sparse K × N coefficient matrix X = (x1 ... xN), xn ∈ RK. The exact question studied here is when a dictionary-coefficient pair (Φ, X) can be recovered as a local minimum of a (nonconvex) l1-criterion with input Y = ΦX. First, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived; these are then specialized to the case where the dictionary is a basis. Finally, assuming a random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. The perhaps surprising result is that the typically sufficient number of training samples N grows only linearly with the signal dimension, up to a logarithmic factor, i.e., N ≈ CK log K, in contrast to previous approaches requiring combinatorially many samples.
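The setup described above can be sketched in a few lines of NumPy: draw a Bernoulli-Gaussian coefficient matrix X, form training signals Y = ΦX from a random orthonormal basis Φ, and compare the l1 cost of the generating basis against a perturbed one. This is only an illustrative sketch, not the paper's algorithm; the dimensions, the activation probability p, and the l1_criterion helper are assumptions chosen for the demo, and since Φ is a basis the exact coefficients are simply Φ⁻¹Y.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, N = 16, 16, 400  # signal dimension, dictionary size (basis case: K = d), samples
p = 0.2                # Bernoulli activation probability of the sparse model

# Bernoulli-Gaussian sparse coefficient matrix X (K x N):
# each entry is nonzero with probability p, with a standard Gaussian value.
mask = rng.random((K, N)) < p
X = mask * rng.standard_normal((K, N))

# Random orthonormal basis as the generating dictionary Phi (d x K).
Phi, _ = np.linalg.qr(rng.standard_normal((d, K)))

Y = Phi @ X  # training signals (d x N)

def l1_criterion(D, Y):
    """Sum of absolute coefficients representing Y exactly in the basis D."""
    C = np.linalg.solve(D, Y)  # for a basis, the exact representation is D^{-1} Y
    return np.abs(C).sum()

base = l1_criterion(Phi, Y)
perturbed = l1_criterion(Phi + 0.1 * rng.standard_normal((d, K)), Y)

print(base)              # equals the l1 norm of X, since solve(Phi, Phi @ X) = X
print(base < perturbed)  # typically True: perturbing Phi destroys sparsity
```

The comparison mirrors the paper's local-identifiability question in miniature: the generating pair (Φ, X) should score better under the l1-criterion than nearby dictionaries, though a single random perturbation is of course no proof of a local minimum.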