Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, provided that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper, we consider a more general signal model and assume signals that live on, or close to, a union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem, in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon setting, which gives a necessary and sufficient condition on the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model: the existence of one-to-one maps to lower-dimensional observation spaces, and the smoothness of the inverse map. We show that almost all linear maps are one-to-one when the observation space has at least the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that, in order for the inverse map to have certain smoothness properties, such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the signal model. In other words, while injective linear sampling schemes require a small number of samples that depends only on the dimension of the subspaces involved, stable sampling methods necessarily require a number of samples that depends logarithmically on the number of subspaces in the model.
These results are then applied to two examples: the standard compressed sensing signal model, in which the signal has a sparse representation in an orthonormal basis, and a sparse signal model with additional tree structure.
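The injectivity condition above can be illustrated numerically. The following is a minimal sketch (not taken from the paper; all dimensions and the Gaussian sampling matrix are illustrative assumptions): for subspaces through the origin, the convex hull of the union of two k-dimensional subspaces is their sum, of dimension at most 2k, so a generic linear map into a 2k-dimensional observation space should be one-to-one on the union. We check this by verifying that the map has full rank 2k on the span of every pair of subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, num_subspaces = 20, 3, 5   # ambient dimension, subspace dimension, number of subspaces
m = 2 * k                        # observation dimension = max dimension of the sum of two subspaces

# Random k-dimensional subspaces, each represented by an orthonormal basis (n x k).
subspaces = [np.linalg.qr(rng.standard_normal((n, k)))[0] for _ in range(num_subspaces)]

# A generic (here: Gaussian) linear sampling map into the m-dimensional observation space.
A = rng.standard_normal((m, n))

# x -> A x is one-to-one on the union iff, for every pair of subspaces,
# A restricted to their (at most 2k-dimensional) sum has trivial null space,
# i.e. A @ [basis_i, basis_j] has full rank 2k.
injective = True
for i in range(num_subspaces):
    for j in range(i + 1, num_subspaces):
        span = np.hstack([subspaces[i], subspaces[j]])  # n x 2k
        if np.linalg.matrix_rank(A @ span) < 2 * k:
            injective = False

print(injective)  # almost surely True for a generic A
```

Checking pairs also covers each subspace on its own, since every individual subspace is contained in one of the pairwise sums. Note that this only demonstrates injectivity; as the abstract points out, a stable (Lipschitz) inverse requires an observation dimension that additionally grows logarithmically with the number of subspaces.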