The nature of statistical learning theory
Multiple centrality corrections in a primal-dual method for linear programming
Computational Optimization and Applications
Primal-dual interior-point methods
Making large-scale support vector machine learning practical
Advances in kernel methods
Fast training of support vector machines using sequential minimal optimization
Advances in kernel methods
An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods
Algorithm 539: Basic Linear Algebra Subprograms for Fortran Usage [F1]
ACM Transactions on Mathematical Software (TOMS)
Interior-Point Methods for Massive Support Vector Machines
SIAM Journal on Optimization
Object-oriented software for quadratic programming
ACM Transactions on Mathematical Software (TOMS)
SVMTorch: support vector machines for large-scale regression problems
The Journal of Machine Learning Research
Efficient SVM training using low-rank kernel representations
The Journal of Machine Learning Research
New approaches to support vector ordinal regression
ICML '05 Proceedings of the 22nd international conference on Machine learning
ICML '06 Proceedings of the 23rd international conference on Machine learning
Training linear SVMs in linear time
Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining
Building Support Vector Machines with Reduced Classifier Complexity
The Journal of Machine Learning Research
A convergent decomposition algorithm for support vector machines
Computational Optimization and Applications
A dual coordinate descent method for large-scale linear SVM
Proceedings of the 25th international conference on Machine learning
Hybrid MPI/OpenMP Parallel Linear Support Vector Machine Training
The Journal of Machine Learning Research
Successive overrelaxation for support vector machines
IEEE Transactions on Neural Networks
Automatic generation of story highlights
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Title generation with quasi-synchronous grammar
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Review: Supervised classification and mathematical optimization
Computers and Operations Research
Multiple aspect summarization using integer linear programming
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Linear support vector machine training can be represented as a large quadratic program. We present an efficient and numerically stable algorithm for this problem using interior point methods, which requires only $\mathcal{O}(n)$ operations per iteration. By exploiting the separability of the Hessian, we provide a unified approach, from an optimization perspective, to 1-norm classification, 2-norm classification, universum classification, ordinal regression and ε-insensitive regression. Our approach has the added advantage of obtaining the hyperplane weights and bias directly from the solver. Numerical experiments indicate that, in contrast to existing methods, the algorithm is largely unaffected by noisy data, and training times for our implementation are consistent and highly competitive. We discuss the effect of using multiple correctors, and of monitoring the angle of the normal to the hyperplane as a termination criterion.
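A minimal sketch of the idea behind the $\mathcal{O}(n)$ cost per iteration (assumed from the abstract, not taken from the paper's implementation): for a linear SVM the Newton system in each interior-point iteration has the form $(D + AA^\top)x = b$, where $D$ is a diagonal $n \times n$ matrix and $A$ is $n \times k$ with $k$ features, $k \ll n$. The Sherman-Morrison-Woodbury identity lets this be solved in $\mathcal{O}(nk^2)$ time without ever forming the $n \times n$ matrix; the function name `woodbury_solve` is illustrative only.

```python
import numpy as np

def woodbury_solve(d, A, b):
    """Solve (diag(d) + A @ A.T) x = b without forming the n x n matrix.

    Uses the Sherman-Morrison-Woodbury identity:
    (D + A A^T)^{-1} b = D^{-1} b - D^{-1} A (I + A^T D^{-1} A)^{-1} A^T D^{-1} b
    Cost is O(n k^2) for A of shape (n, k), versus O(n^3) for a dense solve.
    """
    Dinv_b = b / d                                 # D^{-1} b,   O(n)
    Dinv_A = A / d[:, None]                        # D^{-1} A,   O(n k)
    capacitance = np.eye(A.shape[1]) + A.T @ Dinv_A  # k x k system
    return Dinv_b - Dinv_A @ np.linalg.solve(capacitance, A.T @ Dinv_b)
```

Only the small $k \times k$ "capacitance" system is factorized, which is why the per-iteration cost stays linear in the number of training points.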