Penalized logistic regression (PLR) is a widely used supervised learning model. In this paper, we consider its application to large-scale data problems and adopt a stochastic primal-dual approach for solving PLR. In particular, we employ a random sampling technique in the primal step and a multiplicative weights method in the dual step. This combination yields an optimization method whose per-iteration cost depends sublinearly on both the volume and the dimensionality of the training data. We develop concrete algorithms for PLR with both ℓ2-norm and ℓ1-norm penalties. Experimental results on several large-scale, high-dimensional datasets demonstrate both the efficiency and the accuracy of our algorithms.
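To make the primal-dual structure concrete, the following is a minimal sketch (not the paper's actual algorithm) of the general pattern the abstract describes, for ℓ2-penalized logistic regression: the primal step performs a stochastic gradient update on one example sampled from a dual distribution, and the dual step reweights examples multiplicatively by their current loss. The step sizes, schedules, and the full-vector dual update are illustrative assumptions; the sublinear algorithms in the paper would instead estimate these quantities by sampling rather than touching all examples and features each iteration.

```python
import numpy as np

def stochastic_primal_dual_plr(X, y, lam=0.01, eta=0.5, T=500, seed=None):
    """Illustrative stochastic primal-dual solver for l2-penalized
    logistic regression (labels y in {-1, +1}).

    Primal step: SGD on one example drawn from the dual distribution p.
    Dual step:   multiplicative-weights update driven by per-example loss.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    p = np.full(n, 1.0 / n)                 # dual distribution over examples
    for t in range(T):
        # Primal: sample an example index according to the dual weights.
        i = rng.choice(n, p=p)
        margin = np.clip(y[i] * (X[i] @ w), -30.0, 30.0)
        grad = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
        w -= eta / np.sqrt(t + 1.0) * grad
        # Dual: multiplicative weights on per-example logistic losses.
        # (Computed exactly here for clarity; a sublinear method would
        # estimate these losses by sampling coordinates/examples.)
        losses = np.logaddexp(0.0, -y * (X @ w))
        p = p * np.exp(eta * np.clip(losses, 0.0, 10.0))
        p /= p.sum()
    return w
```

The multiplicative dual update concentrates sampling on hard examples, which is what lets the primal step make progress without full passes over the data.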