In methods for training Support Vector Machines (SVMs), precomputed elements of the Hessian matrix are usually cached to avoid recomputation. However, the widely used least-recently-used (LRU) replacement policy is poorly suited to this setting, because the elements are requested in an irregular pattern. This paper presents a new cache replacement algorithm for Sequential Minimal Optimization (SMO): when the cache is full, the entry corresponding to the component with the minimal violation of the Karush-Kuhn-Tucker (KKT) conditions is evicted to make room for the new one. Experiments show that the cache hit ratio improves over an LRU cache, and that training time is reduced on tasks where computing elements of the Hessian matrix is very time-consuming.
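The eviction policy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and parameter names (`KKTCache`, `violation`, `compute_row`) are hypothetical, and the KKT-violation score is assumed to be supplied by the SMO solver, which tracks it for working-set selection anyway.

```python
# Illustrative sketch of a kernel-row cache for SMO that, when full, evicts
# the row whose training example currently violates the KKT conditions least,
# instead of the least-recently-used row. Names are hypothetical.

class KKTCache:
    """Caches rows of the Hessian (kernel) matrix, keyed by example index."""

    def __init__(self, capacity, violation):
        self.capacity = capacity    # maximum number of cached rows
        self.violation = violation  # callable: example index -> KKT violation score
        self.rows = {}              # example index -> cached kernel row

    def get(self, i, compute_row):
        """Return the row for example i, computing and caching it on a miss."""
        if i in self.rows:
            return self.rows[i]
        if len(self.rows) >= self.capacity:
            # Evict the entry with the smallest KKT violation: an example that
            # nearly satisfies the KKT conditions is unlikely to be selected
            # into the working set again soon, so its row is the safest to drop.
            victim = min(self.rows, key=self.violation)
            del self.rows[victim]
        row = compute_row(i)
        self.rows[i] = row
        return row
```

The design choice mirrors the paper's argument: unlike LRU, which assumes temporal locality of requests, this policy uses optimization-state information (the KKT violations) to predict which rows will be needed, which suits SMO's irregular access pattern.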