The purpose of this contribution is to extend recent results on sparse representations of signals in redundant bases, developed in the noise-free case, to the case of noisy observations. The type of question addressed so far is the following: given an (n,m)-matrix A with m > n and a vector b = Ax0 that admits a sparse representation x0, find a sufficient condition for b to have a unique sparsest representation. The answer is a bound on the number of nonzero entries in x0. We consider the case b = Ax0 + e, where x0 satisfies the sparsity conditions required in the noise-free case and e is a vector of additive noise or modeling errors, and we seek conditions under which x0 can be recovered from b, in a sense to be defined. The conditions we obtain relate the noise energy to the signal level, as well as to a parameter of the quadratic program we use to recover the unknown sparsest representation. When the signal-to-noise ratio is large enough, all the components of the signal survive the denoising; otherwise, the smallest components of the signal are themselves erased in a predictable and quantifiable way.
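A minimal sketch of the recovery setting described above, under stated assumptions: we solve an l1-penalized least-squares problem min_x 0.5*||Ax - b||^2 + h*||x||_1 with a simple iterative soft-thresholding (ISTA) loop. This is one common quadratic-program formulation for sparse recovery, not necessarily the exact program used in the paper; the dictionary, the sparse vector, the noise level, and the penalty parameter h are all illustrative choices.

```python
import numpy as np

def ista_lasso(A, b, h, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + h*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - h / L, 0.0)  # soft-thresholding (prox of h*||.||_1)
    return x

# Redundant dictionary (m > n) and a sparse signal observed in noise: b = A x0 + e.
rng = np.random.default_rng(0)
n, m = 20, 50
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = np.zeros(m)
x0[[3, 17, 41]] = [1.5, -2.0, 1.0]         # sparse representation with 3 nonzero entries
e = 0.01 * rng.standard_normal(n)          # small additive noise (high SNR)
b = A @ x0 + e

x_hat = ista_lasso(A, b, h=0.05)
```

At high signal-to-noise ratio, as the abstract indicates, the recovered vector keeps all components of the signal; lowering the SNR (or raising h) first erases the smallest nonzero entries of x0.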