Sequential minimal optimization (SMO) is an efficient algorithm for training the support vector machine. The most important step of this algorithm is the selection of the working set, which strongly affects the training speed. The feasible-direction strategy for working set selection decreases the objective function at every step, but it can add substantially to the total cost of selecting the working set in each iteration. In this paper, a new candidate working set (CWS) strategy is presented that accounts for both the cost of working set selection and cache performance. The new strategy selects several of the greatest-violating samples from the cache as the working sets for the next several optimization steps, which improves the efficiency of kernel-cache usage and reduces the computational cost of working set selection. Theoretical analysis and experimental results demonstrate that the proposed method reduces training time, especially on large-scale datasets.
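As a rough illustration of the selection step the abstract describes, the sketch below picks the samples that most violate the KKT conditions of the SVM dual, using the standard first-order criterion (largest and smallest values of -y_i * grad_i over the index sets I_up and I_low). This is not the paper's CWS algorithm itself; the function name, the top-k extension, and all array shapes are illustrative assumptions.

```python
import numpy as np

def select_most_violating(grad, y, alpha, C, k=4):
    """Return up to k candidate indices from I_up and I_low each,
    ranked by how strongly they violate the KKT conditions.

    grad  : gradient of the dual objective at the current alpha
    y     : labels in {+1, -1}
    alpha : current dual variables, 0 <= alpha_i <= C
    """
    yg = -y * grad  # first-order violation score
    # I_up: alpha_i can still increase along its feasible direction.
    up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    # I_low: alpha_i can still decrease (mirror set).
    low = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    up_idx = np.where(up)[0]
    low_idx = np.where(low)[0]
    # Largest -y*grad among I_up, smallest among I_low.
    up_top = up_idx[np.argsort(-yg[up_idx])][:k]
    low_top = low_idx[np.argsort(yg[low_idx])][:k]
    return up_top, low_top
```

Caching the kernel rows of these top candidates and reusing them over the next few iterations is the kind of trade-off the abstract's CWS strategy targets: fewer full passes over the gradient for selection, at the price of working with slightly stale violation scores.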