In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. Such problems can be formulated as structured nonsmooth optimization problems, namely l1-regularized linear least-squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated CGD) for the more general class of l1-regularized convex minimization problems, i.e., minimizing the sum of an l1 regularization term and a smooth convex function. We establish a Q-linear convergence rate for the method when the coordinate block is chosen by a Gauss-Southwell-type rule that ensures sufficient descent. We propose efficient implementations of the CGD method and report numerical results for large-scale l1-regularized linear least-squares problems arising in compressed sensing and image deconvolution, as well as large-scale l1-regularized logistic regression problems for feature selection in data classification. Comparisons with several state-of-the-art algorithms designed specifically for large-scale l1-regularized least-squares or logistic regression problems suggest that an efficiently implemented CGD method can outperform them, even though CGD is not tailored to these special problem classes.
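To make the idea concrete, here is a minimal single-coordinate sketch of the CGD scheme specialized to the l1-regularized least-squares case. It is an illustration under stated assumptions, not the authors' implementation: the paper's method handles general coordinate blocks, a quadratic model with a Hessian approximation, and an Armijo line search, whereas in the least-squares case below the coordinate-wise quadratic model is exact, so the unit step is always accepted. The function names (`cgd_l1_least_squares`, `soft_threshold`) and the largest-prox-step variant of the Gauss-Southwell rule are illustrative choices, and the code assumes A has no zero columns.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal map of t*||.||_1: componentwise shrinkage toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_l1_least_squares(A, b, lam, max_iter=500, tol=1e-8):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with single-coordinate
    gradient steps; the coordinate is chosen by a Gauss-Southwell-type
    rule (largest prox step). Assumes A has no zero columns."""
    m, n = A.shape
    x = np.zeros(n)
    r = -b.astype(float)                # residual r = A @ x - b
    h = (A ** 2).sum(axis=0)            # per-coordinate curvature ||A_j||^2
    for _ in range(max_iter):
        g = A.T @ r                     # gradient of the smooth part
        # Candidate step for each coordinate from the quadratic model
        # plus the l1 proximal operator.
        d = soft_threshold(x - g / h, lam / h) - x
        j = int(np.argmax(np.abs(d)))   # Gauss-Southwell-type choice
        if abs(d[j]) < tol:             # no coordinate gives descent
            break
        x[j] += d[j]
        r += d[j] * A[:, j]             # cheap rank-one residual update
    return x

# Tiny usage example on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = cgd_l1_least_squares(A, b, lam=0.1, max_iter=5000)
print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

Note the trade-off built into the Gauss-Southwell rule: choosing the best coordinate requires the full gradient A.T @ r each iteration, which costs O(mn); what the rule buys is the sufficient-descent property underlying the Q-linear convergence guarantee, and in efficient implementations the gradient is maintained incrementally rather than recomputed.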