- ACM Transactions on Mathematical Software (TOMS).
- A training algorithm for optimal margin classifiers. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
- On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications.
- A fast, compact approximation of the exponential function. Neural Computation.
- Text categorization based on regularized linear classification methods. Information Retrieval.
- RCV1: a new benchmark collection for text categorization research. Journal of Machine Learning Research.
- Solving large scale linear prediction problems using stochastic gradient descent algorithms. ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning.
- A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research.
- Training linear SVMs in linear time. KDD '06: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
- Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. ICML '07: Proceedings of the 24th International Conference on Machine Learning.
- A dual coordinate descent method for large-scale linear SVM. ICML '08: Proceedings of the 25th International Conference on Machine Learning.
- Trust region Newton method for logistic regression. Journal of Machine Learning Research.
- Iterative scaling and coordinate descent methods for maximum entropy. ACLShort '09: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers.
- Iterative scaling and coordinate descent methods for maximum entropy models. Journal of Machine Learning Research.
- Fast and scalable local kernel machines. Journal of Machine Learning Research.
- Integrating neural networks and logistic regression to underpin hyper-heuristic search. Knowledge-Based Systems.
- Condensed vector machines: learning fast machines for large data. IEEE Transactions on Neural Networks.
- Journal of Machine Learning Research.
- An improved GLMNET for L1-regularized logistic regression. KDD '11: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
- An improved GLMNET for L1-regularized logistic regression. Journal of Machine Learning Research.
- Stochastic coordinate descent methods for regularized smooth and nonsmooth losses. ECML PKDD '12: Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I.
- A fast parallel SGD for matrix factorization in shared memory systems. RecSys '13: Proceedings of the 7th ACM Conference on Recommender Systems.
- Large-scale linear nonparallel support vector machine solver. Neural Networks.
Linear support vector machines (SVMs) are useful for classifying large-scale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVMs with the L2-loss function. At each step, the proposed method minimizes a one-variable sub-problem while fixing all other variables. The sub-problem is solved by Newton steps with a line search technique. The procedure converges globally at a linear rate. Because each sub-problem involves only the values of a single feature, the proposed approach is suitable when accessing a feature is more convenient than accessing an instance. Experiments show that our method is more efficient and stable than state-of-the-art methods such as Pegasos and TRON.
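The per-coordinate update the abstract describes can be sketched for the primal L2-loss objective, min_w 0.5 ||w||^2 + C * sum_i max(0, 1 - y_i w.x_i)^2: for each coordinate j, take a Newton step using the one-variable gradient and generalized second derivative, then backtrack with an Armijo-style line search. This is an illustrative reconstruction under assumed constants (C, beta, sigma, epoch count), not the authors' implementation, and it uses dense NumPy columns where the paper exploits sparse feature access:

```python
import numpy as np

def objective(w, X, y, C):
    """Primal L2-loss SVM objective: 0.5||w||^2 + C * sum max(0, 1 - y_i w.x_i)^2."""
    loss = np.maximum(0.0, 1.0 - y * (X @ w))
    return 0.5 * (w @ w) + C * np.sum(loss ** 2)

def cd_l2svm(X, y, C=1.0, epochs=20, beta=0.5, sigma=0.01):
    """Coordinate descent: one Newton step per feature, with Armijo backtracking."""
    n, d = X.shape
    w = np.zeros(d)
    wx = X @ w                      # cached w.x_i values, updated incrementally
    for _ in range(epochs):
        for j in range(d):
            xj = X[:, j]            # only feature j's column is touched here
            b = 1.0 - y * wx
            act = b > 0             # instances currently inside the margin
            g = w[j] - 2.0 * C * np.sum(y[act] * xj[act] * b[act])  # D'(0)
            h = 1.0 + 2.0 * C * np.sum(xj[act] ** 2)                # D''(0) >= 1
            z = -g / h              # Newton direction for this coordinate
            lam, f0 = 1.0, objective(w, X, y, C)
            for _ in range(30):     # Armijo line search: shrink until decrease
                w_trial = w.copy()
                w_trial[j] += lam * z
                if objective(w_trial, X, y, C) - f0 <= sigma * lam * g * z:
                    break
                lam *= beta
            w[j] += lam * z
            wx += lam * z * xj
    return w

# Tiny demo on dense synthetic data (the paper targets sparse features, where
# column access is cheap): the objective should drop from its starting value.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=40) > 0, 1.0, -1.0)
w = cd_l2svm(X, y, C=1.0)
```

Caching the products w.x_i and updating them in O(nnz of column j) after each step is what makes the per-feature sub-problem cheap; the line search only needs to re-evaluate the losses along one coordinate direction.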