Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption, which often guarantees recovery from the given measurements, is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector, whose nonzero elements appear in fixed blocks. We then propose a mixed l2/l1 program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods, as well as equivalence conditions under which the entire set can be determined efficiently.
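As a rough illustration of the mixed l2/l1 idea (not the authors' implementation), the regularized least-squares form of a block-sparse program, min_x 0.5||Ax - y||^2 + lam * sum over blocks b of ||x[b]||_2, can be solved by proximal gradient descent with block-wise soft thresholding. All dimensions, names, and parameter values below are made up for a toy example with k = 2 active blocks out of m = 8:

```python
import numpy as np

def block_soft_threshold(x, blocks, tau):
    # Proximal operator of the mixed l2/l1 penalty:
    # shrink the l2 norm of each block by tau, zeroing small blocks.
    out = np.zeros_like(x)
    for b in blocks:
        nrm = np.linalg.norm(x[b])
        if nrm > tau:
            out[b] = (1.0 - tau / nrm) * x[b]
    return out

def block_sparse_recover(A, y, blocks, lam=0.01, n_iter=2000):
    # Proximal gradient (ISTA) for
    #   min_x 0.5 * ||A x - y||_2^2 + lam * sum_b ||x[b]||_2
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft_threshold(x - A.T @ (A @ x - y) / L, blocks, lam / L)
    return x

# Toy problem: 8 blocks of length 4; only blocks 1 and 5 are active.
rng = np.random.default_rng(0)
d, m = 4, 8
blocks = [slice(i * d, (i + 1) * d) for i in range(m)]
x_true = np.zeros(m * d)
x_true[blocks[1]] = rng.standard_normal(d)
x_true[blocks[5]] = rng.standard_normal(d)
A = rng.standard_normal((20, m * d)) / np.sqrt(20)  # 20 samples of a 32-dim signal
y = A @ x_true
x_hat = block_sparse_recover(A, y, blocks)
```

On this noiseless toy instance the estimate concentrates its energy on the two active blocks; in practice lam would be tuned to the noise level, and the recovery guarantees in the paper are stated in terms of the block RIP of the sampling matrix A.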