A new iterative algorithm is proposed for the solution of minimization problems in infinite-dimensional Hilbert spaces that involve sparsity constraints in the form of $\ell^{p}$-penalties. In contrast to the well-known algorithm considered by Daubechies, Defrise, and De Mol, it uses hard instead of soft shrinkage. It is shown that the hard shrinkage algorithm is a special case of the generalized conditional gradient method. Convergence properties of the generalized conditional gradient method with a quadratic discrepancy term are analyzed. This leads to strong convergence of the iterates with convergence rates $\mathcal{O}(n^{-1/2})$ and $\mathcal{O}(\lambda^n)$ for $p=1$ and $1<p\le 2$, respectively.
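The iteration described above alternates a descent step on the quadratic discrepancy with a hard-shrinkage step. A minimal finite-dimensional sketch of this idea, in the style of iterative hard thresholding, is shown below; the function names, the threshold `tau`, and the step-size choice are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def hard_threshold(x, tau):
    # Hard shrinkage: keep entries with magnitude above tau, zero the rest
    # (unlike soft shrinkage, surviving entries are not moved toward zero).
    return np.where(np.abs(x) > tau, x, 0.0)

def iterative_hard_shrinkage(A, b, tau=0.5, step=None, n_iter=500):
    # Illustrative sketch: gradient step on the quadratic discrepancy
    # ||Ax - b||^2 / 2, followed by hard shrinkage of the iterate.
    if step is None:
        # Step size 1 / ||A||_2^2 keeps the gradient step non-expansive.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)        # gradient of the discrepancy term
        x = hard_threshold(x - step * grad, tau)
    return x

# Small demo: recover a sparse vector from exact overdetermined data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]
b = A @ x_true
x_hat = iterative_hard_shrinkage(A, b, tau=0.5)
print(np.nonzero(x_hat)[0])  # support of the recovered vector
```

Because the threshold is applied only to entries below `tau`, the iterates are exactly sparse at every step; entries on the true support survive the shrinkage once the gradient steps have grown them past the threshold.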