A Bayesian approach to on-line learning
On-line learning in neural networks
Atomic Decomposition by Basis Pursuit
SIAM Journal on Scientific Computing
Text Categorization Based on Regularized Linear Classification Methods
Information Retrieval
Bayesian parameter estimation via variational methods
Statistics and Computing
Expectation Propagation for approximate Bayesian inference
UAI '01 Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence
A family of algorithms for approximate Bayesian inference
RCV1: A New Benchmark Collection for Text Categorization Research
The Journal of Machine Learning Research
Predictive automatic relevance determination by expectation propagation
ICML '04 Proceedings of the twenty-first international conference on Machine learning
The Entire Regularization Path for the Support Vector Machine
The Journal of Machine Learning Research
Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression
The Journal of Machine Learning Research
Debellor: A Data Mining Platform with Stream Architecture
Transactions on Rough Sets IX
Recovering sparse signals with a certain family of nonconvex penalties and DC programming
IEEE Transactions on Signal Processing
Online learning for multi-task feature selection
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
Expert Systems with Applications: An International Journal
Computational Statistics & Data Analysis
Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization
The Journal of Machine Learning Research
Sublinear algorithms for penalized logistic regression in massive datasets
ECML PKDD'12 Proceedings of the 2012 European conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I
Efficient online learning for multitask feature selection
ACM Transactions on Knowledge Discovery from Data (TKDD)
Classifiers favoring sparse solutions, such as support vector machines, relevance vector machines, and LASSO-regularized classifiers, are competitive methods for high-dimensional classification problems. However, current algorithms for training sparse classifiers typically scale poorly with the number of training examples. This paper proposes online and multi-pass algorithms for training sparse linear classifiers on high-dimensional data, with computational complexity and memory requirements that make learning on massive data sets feasible. The central idea that makes this possible is a straightforward quadratic approximation to the likelihood function.
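To illustrate the central idea, the following is a minimal sketch of online logistic regression driven by a per-example quadratic (second-order Taylor) approximation of the log-likelihood. This is an assumption-laden illustration of the general technique, not the paper's exact algorithm: the function name, the diagonal curvature accumulator, and the damping parameter `lam` are all choices made here for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_quadratic_logreg(X, y, lam=1.0):
    """Online logistic regression via a per-example quadratic
    (second-order Taylor) approximation of the log-likelihood.

    Labels y are in {-1, +1}; lam provides simple quadratic damping.
    Illustrative sketch only -- not the paper's exact algorithm.
    """
    n, d = X.shape
    w = np.zeros(d)
    H = np.full(d, lam)              # diagonal curvature accumulator
    for x, t in zip(X, y):
        m = t * (x @ w)              # signed margin of the example
        p = sigmoid(-m)              # probability of misclassification
        g = -t * p * x               # gradient of the negative log-likelihood
        h = p * (1.0 - p) * x * x    # diagonal of the Hessian contribution
        H += h
        w -= g / H                   # damped diagonal Newton step
    return w
```

Each example contributes a local quadratic model of the likelihood (gradient plus diagonal Hessian), so the update is a cheap closed-form Newton-style step with O(d) cost and memory per example, which is what makes single-pass learning on massive data sets feasible.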