Compressed sensing (CS) decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(k log(N/k)) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, this theoretical guarantee would certify the quality of the resulting CS estimate. Because the underlying sparsity of the signal x is unknown, however, the quality of a CS estimate x* obtained from m measurements is not assured. This paper nevertheless shows that sharp bounds on the error ‖x − x*‖_{ℓ_2^N} can be achieved with almost no effort. More precisely, suppose a maximum number of measurements m is imposed in advance. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates (x_j)_{j=1}^p of x from the remaining m − 10 log p measurements; the errors ‖x − x_j‖_{ℓ_2^N} for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k at almost no cost. This observation has applications outside CS as well.
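The holdout idea above can be sketched numerically. The following is a minimal, hypothetical illustration (the problem sizes, the Gaussian measurement matrix, and the crude hard-thresholding decoder are all illustrative assumptions, not the paper's setup): r ≈ 10 log p rows of Φ are held out, and for each candidate estimate x_j the validation residual ‖y_cv − Φ_cv x_j‖ / √r concentrates around the true error ‖x − x_j‖_2, since the held-out Gaussian rows are independent of x_j.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper).
N, m, p = 1000, 200, 10              # ambient dim, measurement budget, number of estimates
r = int(np.ceil(10 * np.log(p)))     # measurements reserved for cross validation

# Sparse ground-truth signal; its sparsity is treated as unknown by the decoder.
x = np.zeros(N)
support = rng.choice(N, size=15, replace=False)
x[support] = rng.standard_normal(support.size)

# Gaussian measurement matrix with unit-variance entries, split into
# m - r recovery rows and r held-out validation rows.
Phi = rng.standard_normal((m, N))
Phi_rec, Phi_cv = Phi[: m - r], Phi[m - r :]
y_rec, y_cv = Phi_rec @ x, Phi_cv @ x

def estimate(k):
    """Crude k-term decoder (hard thresholding of the back-projection);
    a stand-in for a real CS decoder such as OMP or l1 minimization."""
    proxy = Phi_rec.T @ y_rec / (m - r)
    xk = np.zeros(N)
    idx = np.argsort(np.abs(proxy))[-k:]
    xk[idx] = proxy[idx]
    return xk

# For each candidate sparsity level k, the held-out residual
# ||y_cv - Phi_cv x_j|| / sqrt(r) is a proxy for the true error ||x - x_j||_2.
cv_errs, true_errs = [], []
for k in np.linspace(5, 50, p, dtype=int):
    xj = estimate(int(k))
    cv_errs.append(np.linalg.norm(y_cv - Phi_cv @ xj) / np.sqrt(r))
    true_errs.append(np.linalg.norm(x - xj))
```

Because Φ_cv is independent of each x_j, each component of Φ_cv(x − x_j) is Gaussian with variance ‖x − x_j‖², so the normalized residual tracks the true error without the decoder ever seeing the validation rows.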