Approximating a signal or an image with a sparse linear expansion from an overcomplete dictionary of atoms is an extremely useful tool for solving many signal processing problems. Finding the sparsest approximation of a signal from an arbitrary dictionary is an NP-hard problem. Despite this, several algorithms have been proposed that provide sub-optimal solutions. However, it is generally difficult to know how close the computed solution is to being "optimal", and whether another algorithm could provide a better result. In this paper we provide a simple test to check whether the output of a sparse approximation algorithm is nearly optimal, in the sense that no significantly different linear expansion from the dictionary can provide both a smaller approximation error and better sparsity. As a by-product of our theorems, we obtain results on the identifiability of sparse overcomplete models in the presence of noise, for a fairly large class of sparse priors.
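To make the setting concrete, the following is a minimal sketch of greedy sparse approximation from an overcomplete dictionary, using orthogonal matching pursuit as a stand-in for the sub-optimal algorithms the abstract refers to. The dictionary `D`, the signal `y`, and the sparsity level `k` below are illustrative assumptions, not taken from the paper, and this sketch does not implement the paper's near-optimality test itself.

```python
import numpy as np

def omp(D, y, k):
    """Greedily select up to k atoms of D to approximate y.

    D : (m, n) dictionary with unit-norm columns (atoms), n > m (overcomplete).
    y : (m,) signal to approximate.
    k : target number of atoms (sparsity level).
    """
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Least-squares fit on the selected atoms, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Illustrative setup: a random overcomplete dictionary and a 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
x_true = np.zeros(64)
x_true[[3, 17, 40]] = [1.5, -2.0, 1.0]  # 3-sparse ground-truth coefficients
y = D @ x_true

x_hat = omp(D, y, k=3)
print("approximation error:", np.linalg.norm(y - D @ x_hat))
```

The output `x_hat` is one sub-optimal sparse expansion; the question the paper addresses is whether such an output can be certified as nearly optimal, i.e. whether any substantially different expansion could simultaneously achieve a smaller approximation error and better sparsity.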