A fast algorithm for nonconvex approaches to sparse recovery problems
Signal Processing
In this paper, we develop a novel methodology for minimizing a class of nonconvex (concave on the non-negative orthant) functions for solving an underdetermined linear system of equations As = x when the solution vector s is known a priori to be sparse. The proposed technique is based on locally replacing the original objective function by a quadratic convex function that is easily minimized. The resulting algorithm is iterative and is guaranteed to converge to a fixed point of the original objective function. For a certain selection of convex objective functions, the class of algorithms called iterative reweighted least squares (IRLS) is shown to be a special case of the proposed methodology; the proposed algorithms therefore generalize and unify these earlier methods. In addition, we propose a new class of algorithms with better convergence properties than the regular IRLS algorithms, which can hence be considered enhancements of those algorithms. Since the original objective functions are nonconvex, the proposed algorithm is susceptible to convergence to a local minimum. To alleviate this difficulty, we propose a random perturbation technique that enhances the performance of the proposed algorithm. Numerical results show that the proposed algorithms outperform several well-known algorithms commonly used for solving the same problem.
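To make the iteration described in the abstract concrete, below is a minimal Python sketch of an IRLS-style solver for A s = x that minimizes a smoothed concave surrogate sum_i (s_i^2 + eps)^(p/2) by repeatedly taking the closed-form weighted minimum-norm solution, with an optional random-perturbation step to help escape local minima. All function names, parameter values, and the epsilon schedule here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def irls_sparse_recovery(A, x, p=0.5, eps0=1.0, eps_min=1e-8, eps_decay=0.5,
                         n_iter=200, perturb_every=None, noise=1e-2, seed=0):
    """IRLS-type sketch for sparse recovery from A s = x (A is m x n, m < n).

    Each iteration replaces the concave surrogate sum_i (s_i^2 + eps)^(p/2)
    by a quadratic upper bound and takes the weighted minimum-norm solution
    s = W^{-1} A^T (A W^{-1} A^T)^{-1} x with W = diag(w). Parameter names
    and schedules are illustrative, not the values used in the paper.
    """
    rng = np.random.default_rng(seed)
    s = np.linalg.lstsq(A, x, rcond=None)[0]        # minimum-l2-norm initializer
    eps = eps0
    best_s, best_obj = s.copy(), np.inf

    def objective(v):
        # Smoothed l_p-like measure of sparsity used to track the best iterate.
        return np.sum((v**2 + eps_min) ** (p / 2))

    for k in range(1, n_iter + 1):
        if perturb_every and k % perturb_every == 0:
            # Random perturbation of the iterate to help escape local minima.
            s = s + noise * rng.standard_normal(s.shape)
        w = (s**2 + eps) ** (p / 2 - 1)              # IRLS weights from the concave penalty
        W_inv = 1.0 / w
        G = (A * W_inv) @ A.T                        # A W^{-1} A^T (m x m)
        s = W_inv * (A.T @ np.linalg.solve(G, x))    # feasible: satisfies A s = x
        eps = max(eps * eps_decay, eps_min)          # gradually sharpen the surrogate
        obj = objective(s)
        if obj < best_obj:
            best_s, best_obj = s.copy(), obj
    return best_s


if __name__ == "__main__":
    # Toy usage: recover a 4-sparse vector from 20 random measurements.
    rng = np.random.default_rng(1)
    m, n, k = 20, 50, 4
    A = rng.standard_normal((m, n))
    s_true = np.zeros(n)
    s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x = A @ s_true
    s_hat = irls_sparse_recovery(A, x, perturb_every=25)
    print("recovery error:", np.linalg.norm(s_hat - s_true))
```

The decaying eps schedule is one common way to keep the early weights well conditioned while approaching the nonconvex objective in later iterations; the perturbation wrapper simply keeps the feasible iterate with the smallest surrogate value seen so far.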