We propose a modular solver for training Support Vector Machines (SVMs). Built on the decomposition method (DM), the state-of-the-art approach for training SVMs with nonlinear kernels, the solver comprises several modules: data representation, kernel function evaluation, problem construction, working set selection, subproblem solution, and cache maintenance. The working set selection module contains two sub-modules, one for selecting new indices and one for inheriting indices from the previous iteration, and implements several popular working set selection rules. It also exposes an interface through which users familiar with DM can add their own selection strategies. In the subproblem solution module, we design a Quadratic Programming (QP) solver interface, so that any QP package can be plugged into the framework. In particular, exploiting the structure of the quadratic subproblem in SVM training, we develop a subproblem solver that combines the projected gradient method with an interior point method. Numerical experiments on extensive data sets validate the correctness and efficiency of the proposed solver.
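To make the modular design concrete, the following is a minimal sketch, not the paper's implementation, of a decomposition loop with a pluggable working set selection rule. The class and function names (`WorkingSetSelector`, `MaxViolationSelector`, `train_svm`) are hypothetical. For simplicity it optimizes the bias-free SVM dual (no equality constraint), so a working set of size one admits a closed-form subproblem solution; the paper's solver instead handles the standard dual with a combined projected gradient and interior point subproblem solver.

```python
import numpy as np

class WorkingSetSelector:
    """Pluggable interface (hypothetical): pick the next index to optimize."""
    def select(self, grad, alpha, C):
        raise NotImplementedError

class MaxViolationSelector(WorkingSetSelector):
    """Select the coordinate with the largest projected-gradient violation."""
    def select(self, grad, alpha, C):
        pg = grad.copy()
        # Zero out ascent directions blocked by the box constraints 0 <= alpha <= C.
        pg[(alpha <= 0.0) & (grad < 0.0)] = 0.0
        pg[(alpha >= C) & (grad > 0.0)] = 0.0
        i = int(np.argmax(np.abs(pg)))
        return i, abs(pg[i])

def train_svm(K, y, C=1.0, tol=1e-6, max_iter=1000, selector=None):
    """Decomposition method for the bias-free SVM dual:
         max  sum(alpha) - 0.5 * alpha' Q alpha,   0 <= alpha <= C,
       where Q[i, j] = y_i * y_j * K[i, j]. A working set of size one
       gives a one-variable QP solvable in closed form (clip to the box)."""
    selector = selector or MaxViolationSelector()
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(len(y))
    grad = np.ones(len(y))        # gradient of the dual objective at alpha = 0
    for _ in range(max_iter):
        i, violation = selector.select(grad, alpha, C)
        if violation < tol:       # approximate KKT conditions satisfied
            break
        # One-variable subproblem: Newton step on a concave quadratic, clipped.
        new_ai = np.clip(alpha[i] + grad[i] / Q[i, i], 0.0, C)
        delta = new_ai - alpha[i]
        alpha[i] = new_ai
        grad -= delta * Q[:, i]   # incremental gradient update
    return alpha
```

Swapping in a different selection strategy only requires subclassing `WorkingSetSelector`, which mirrors the interface-based extensibility the solver advertises.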