LASSO (Least Absolute Shrinkage and Selection Operator) is a useful tool for achieving shrinkage and variable selection simultaneously. Because LASSO uses the L1 penalty, its optimization typically relies on quadratic programming (QP) or general nonlinear programming, both of which are known to be computationally intensive. In this paper, we propose a gradient descent algorithm for LASSO. Although the final result is slightly less accurate, the proposed algorithm is computationally simpler than QP or nonlinear programming, and so can be applied to large-scale problems. We provide the convergence rate of the algorithm, and illustrate it with simulated models as well as real data sets.
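
For illustration, here is a minimal sketch in Python/NumPy of one standard gradient-based approach to the LASSO objective: proximal gradient descent (ISTA) with soft-thresholding. This is a generic sketch of the idea of solving LASSO by gradient steps rather than QP; the function names (soft_threshold, lasso_ista) and the choice of ISTA are illustrative assumptions, not necessarily the specific algorithm proposed in the paper.

    import numpy as np

    # Illustrative sketch (ISTA); not necessarily the paper's algorithm.
    def soft_threshold(z, t):
        # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=1000):
        # Minimize 0.5 * ||y - X @ beta||^2 + lam * ||beta||_1
        # by proximal gradient descent with fixed step size 1/L.
        n, p = X.shape
        beta = np.zeros(p)
        # L = largest eigenvalue of X^T X, the Lipschitz constant of the
        # smooth part's gradient; norm(X, 2) is the spectral norm of X.
        L = np.linalg.norm(X, 2) ** 2
        for _ in range(n_iter):
            grad = X.T @ (X @ beta - y)  # gradient of the squared-error term
            beta = soft_threshold(beta - grad / L, lam / L)
        return beta

    # Example on a simulated sparse model:
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    beta_true = np.zeros(20)
    beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + 0.1 * rng.standard_normal(100)
    beta_hat = lasso_ista(X, y, lam=5.0)  # most coefficients shrink exactly to zero

With a fixed step size 1/L, ISTA is known to converge at rate O(1/k) in objective value; the convergence rate stated in the paper applies to the authors' own algorithm.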