This paper presents a decomposition method for efficiently constructing ℓ1-norm Support Vector Machines (SVMs). The decomposition algorithm introduced in this paper possesses many desirable properties: it is provably convergent, scales well to large datasets, is easy to implement, and extends naturally to support vector regression and other SVM variants. We demonstrate the efficiency of our algorithm by training on (dense) synthetic datasets of up to 20 million points in ℝ^32. The results show our algorithm to be several orders of magnitude faster than a previously published method for the same task. We also present experimental results on real datasets, on which our method is not only very fast but also highly competitive with the leading SVM implementations.
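For context, the ℓ1-norm SVM replaces the usual ℓ2 regularizer with the ℓ1 norm of the weight vector, which turns training into a linear program (and tends to produce sparse weights). The abstract does not describe the decomposition algorithm itself, so the sketch below is only a hedged illustration of the underlying optimization problem: it solves the full ℓ1-SVM linear program directly with SciPy's `linprog` on a tiny toy dataset (the dataset, variable names, and `C` value are my own assumptions, not from the paper). The paper's contribution is a decomposition scheme that scales this problem far beyond what such a direct solver can handle.

```python
# Hedged sketch: the l1-norm SVM as a linear program, solved directly.
# Formulation: minimize ||w||_1 + C * sum(xi)
#   subject to y_i (w . x_i + b) >= 1 - xi_i,  xi_i >= 0.
# Splitting w = u - v with u, v >= 0 makes ||w||_1 = sum(u + v) linear.
import numpy as np
from scipy.optimize import linprog

# Toy linearly separable data (assumed for illustration only).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape
C = 1.0  # slack penalty (assumed value)

# Variable layout: [u (d), v (d), b (1), xi (n)]
c = np.concatenate([np.ones(d), np.ones(d), [0.0], C * np.ones(n)])

# Margin constraints rewritten as A_ub @ z <= b_ub:
#   -y_i (u - v) . x_i - y_i b - xi_i <= -1
A_ub = np.hstack([
    -y[:, None] * X,          # coefficients of u
    y[:, None] * X,           # coefficients of v
    -y[:, None],              # coefficient of b
    -np.eye(n),               # coefficient of xi
])
b_ub = -np.ones(n)

# u, v, xi nonnegative; b free.
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d]
preds = np.sign(X @ w + b)
```

A direct LP solve like this is cubic-ish in problem size in practice; a decomposition method of the kind the abstract describes works on small subproblems instead, which is what makes training on tens of millions of points feasible.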