Finding the sparsest solution to an underdetermined system of linear equations is NP-hard in general. However, recent research has shown that for certain systems of linear equations, the sparsest solution (i.e. the solution with the smallest number of nonzeros) is also the solution with minimal l1 norm, and so can be found by a computationally tractable method.

For a given n × m matrix Φ defining a system y = Φα, with n < m making the system underdetermined, this phenomenon holds whenever there exists a 'sufficiently sparse' solution α0. We quantify the 'sufficient sparsity' condition by defining an equivalence breakdown point (EBP): the degree of sparsity of α required to guarantee that equivalence holds; this threshold depends on the matrix Φ.

In this paper we study the size of the EBP for 'typical' matrices with unit-norm columns (the uniform spherical ensemble, USE); Donoho showed that for such matrices Φ, the EBP is at least proportional to n. We distinguish three notions of breakdown point (global, local, and individual) and describe a semi-empirical heuristic for predicting the local EBP at this ensemble. Our heuristic identifies a configuration that can cause breakdown, and predicts the level of sparsity required to avoid that situation. In experiments, our heuristic provides upper and lower bounds bracketing the EBP for 'typical' matrices in the USE. For instance, for an n × m matrix Φn,m with m = 2n, our heuristic predicts breakdown of local equivalence when the coefficient vector α has about 30% nonzeros (relative to the reduced dimension n). This figure reliably describes the observed empirical behavior. A rough approximation to the observed breakdown point is provided by the simple formula 0.44 · n/log(2m/n).

There are many matrix ensembles of interest outside the USE; our heuristic may be useful in speeding up empirical studies of breakdown point at such ensembles.
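As a hedged illustration (not the paper's code), the l1/l0 equivalence described above can be checked numerically: draw Φ from the uniform spherical ensemble, plant a solution whose sparsity is well below the predicted ~30% breakdown level, and solve basis pursuit (min ||α||_1 subject to Φα = y) as a linear program via the standard variable split α = u − v. The dimensions, seed, and use of `scipy.optimize.linprog` are illustrative choices.

```python
# Sketch: l1-minimization recovers a sufficiently sparse solution of an
# underdetermined system y = Phi alpha, for Phi drawn from the USE.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 60, 3          # k nonzeros, well below the predicted ~0.3*n breakdown

# Draw Phi from the USE: Gaussian columns normalized to unit norm.
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

# A sparse ground truth alpha0 and its observations y = Phi @ alpha0.
alpha0 = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
alpha0[support] = rng.standard_normal(k)
y = Phi @ alpha0

# Basis pursuit as an LP: with alpha = u - v and u, v >= 0,
# minimize sum(u) + sum(v) subject to [Phi, -Phi] @ [u; v] = y.
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y, bounds=(0, None))
alpha_hat = res.x[:m] - res.x[m:]

# Below the breakdown point, the LP solution matches the sparse ground truth.
err = np.max(np.abs(alpha_hat - alpha0))
```

At sparsity levels above the EBP, the same LP would typically return a different (non-sparse) minimal-l1 solution, which is exactly the equivalence breakdown the paper quantifies.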
Rather than solving numerous linear programming problems for each (n, m) combination, at least several for each degree of sparsity, the heuristic suggests conducting a few experiments to measure its driving term and then deriving predictive bounds. We tested the applicability of this heuristic to three special ensembles of matrices, including the partial Hadamard ensemble and the partial Fourier ensemble, and found that it accurately predicts the sparsity level at which local equivalence breakdown occurs, which is lower than for the USE. A rough approximation to the prediction is provided by the simple formula 0.65 · n/log(1 + 10m/n).
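For concreteness, the two empirical approximation formulas quoted above can be evaluated side by side at m = 2n; a minimal sketch (the helper names are illustrative, not from the paper):

```python
# Sketch: the two closed-form breakdown-point approximations from the text,
# as fractions of the reduced dimension n, evaluated at m = 2n.
import math

def ebp_use(n, m):
    """USE approximation: 0.44 * n / log(2m/n)."""
    return 0.44 * n / math.log(2 * m / n)

def ebp_partial(n, m):
    """Partial Hadamard/Fourier approximation: 0.65 * n / log(1 + 10m/n)."""
    return 0.65 * n / math.log(1 + 10 * m / n)

n = 1000
m = 2 * n
frac_use = ebp_use(n, m) / n          # 0.44 / log 4  ~ 0.317, the ~30% figure
frac_partial = ebp_partial(n, m) / n  # 0.65 / log 21 ~ 0.213, a lower level
```

This reproduces the two observations in the text: roughly 30% nonzeros (relative to n) at the USE breakdown for m = 2n, and a distinctly lower breakdown level for the partial Hadamard and partial Fourier ensembles.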