A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser (1994) and Donoho (1994). These measures include the p-norm-like (ℓ(p≤1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the affine scaling transformation (AST) based methods commonly employed by the interior-point approach to nonlinear optimization. The algorithms minimizing the ℓ(p≤1) diversity measures are equivalent to a previously developed class of algorithms called the focal underdetermined system solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of their convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but is shown not to converge to a fully sparse solution.
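The FOCUSS class referenced above can be understood as an iteratively reweighted minimum-norm solver for min Σᵢ|xᵢ|^p subject to Ax = b, where each step rescales the coordinates by powers of the current iterate (the affine scaling) and solves a weighted least-norm problem. The following is a minimal sketch of that iteration, not the paper's exact procedure; the function name, problem dimensions, iteration count, and regularization constant `eps` are illustrative choices:

```python
import numpy as np

def focuss(A, b, p=0.8, n_iter=50, eps=1e-8):
    """FOCUSS-style iteration: drive sum(|x_i|^p), p <= 1, toward a
    sparse solution of the underdetermined system A x = b."""
    x = np.linalg.pinv(A) @ b                 # minimum 2-norm initialization
    for _ in range(n_iter):
        # Affine-scaling weights from the current iterate; eps keeps the
        # weight matrix nonsingular once entries shrink toward zero.
        w = np.abs(x) ** (1.0 - p / 2.0) + eps
        W = np.diag(w)
        # Minimum-norm solution of (A W) q = b, mapped back as x = W q.
        x = W @ (np.linalg.pinv(A @ W) @ b)
    return x

# Toy underdetermined system with a 3-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = focuss(A, b)
```

Each iterate remains (numerically) feasible, since x = W(AW)⁺b satisfies Ax = b whenever AW has full row rank; what changes across iterations is that the p < 1 diversity measure decreases, so the mass of x concentrates on progressively fewer coordinates.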