The nature of statistical learning theory
Advances in kernel methods: support vector learning
Solving the quadratic programming problem arising in support vector classification
Advances in kernel methods
Making large-scale support vector machine learning practical
Advances in kernel methods
Fast training of support vector machines using sequential minimal optimization
Advances in kernel methods
An introduction to support vector machines and other kernel-based learning methods
Support vector machines: training and applications
Estimation of Dependences Based on Empirical Data (Springer Series in Statistics)
Successive overrelaxation for support vector machines
IEEE Transactions on Neural Networks
A Simple Decomposition Method for Support Vector Machines
Machine Learning
A note on the decomposition methods for support vector regression
Neural Computation
Training ν-support vector regression: theory and algorithms
Neural Computation
A New Cache Replacement Algorithm in SMO
SVM '02 Proceedings of the First International Workshop on Pattern Recognition with Support Vector Machines
Neighborhood Property–Based Pattern Selection for Support Vector Machines
Neural Computation
QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines
The Journal of Machine Learning Research
Incremental Support Vector Learning: Analysis, Implementation and Applications
The Journal of Machine Learning Research
General Polynomial Time Decomposition Algorithms
The Journal of Machine Learning Research
AusDM '06 Proceedings of the fifth Australasian conference on Data mining and analytics - Volume 61
On the complexity of working set selection
Theoretical Computer Science
General polynomial time decomposition algorithms
COLT'05 Proceedings of the 18th annual conference on Learning Theory
The article presents a general view of a class of decomposition algorithms for training Support Vector Machines (SVMs) that are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims (1999; in Schölkopf et al. (Eds.), Advances in kernel methods: Support vector learning, pp. 185–208, MIT Press). Its extension to the regression SVM, the maximal inconsistency algorithm, was recently presented by the author (Laskov, 2000; in Solla, Leen, & Müller (Eds.), Advances in neural information processing systems 12, pp. 484–490, MIT Press). A detailed account of both algorithms is given, complemented by a theoretical investigation of the relationship between them. The two algorithms are proved to be equivalent for the pattern recognition SVM, and a feasible direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. Experimental results demonstrate an order-of-magnitude decrease in training time compared with training without decomposition and, most importantly, provide evidence of the linear convergence rate of the feasible direction decomposition algorithms.
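For readers who want the mechanics behind the abstract, the sketch below illustrates the selection step that feasible-direction decomposition methods of this family share: picking a maximally KKT-violating pair of dual variables as the two-element working set of the standard C-SVM dual. This is a minimal Python illustration under stated assumptions, not the paper's exact procedure; the function name, the gradient convention (minimizing f(α) = ½ αᵀQα − eᵀα with Q[i, j] = y[i] y[j] K(x[i], x[j])), and the tolerance are choices made here for concreteness.

```python
import numpy as np

def select_working_pair(alpha, grad, y, C, tol=1e-3):
    """Pick a maximally KKT-violating pair (i, j) for the C-SVM dual.

    Assumes the dual is written as min 0.5*a'Qa - sum(a) with
    Q[i, j] = y[i]*y[j]*K(x[i], x[j]), so grad = Q @ alpha - 1.
    Returns None once the KKT gap falls below tol.
    """
    # Variables that can still move "up" (along +y) or "down" (along -y)
    # without leaving the box [0, C].
    up = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))
    low = ((y > 0) & (alpha > 0)) | ((y < 0) & (alpha < C))
    up_idx, low_idx = np.flatnonzero(up), np.flatnonzero(low)
    if up_idx.size == 0 or low_idx.size == 0:
        return None

    # -y[t] * grad[t] scores each feasible coordinate direction; the most
    # "inconsistent" pair (largest vs. smallest score) defines the steepest
    # feasible direction restricted to two variables.
    score = -y * grad
    i = up_idx[np.argmax(score[up_idx])]
    j = low_idx[np.argmin(score[low_idx])]

    if score[i] - score[j] <= tol:  # KKT conditions hold up to tol
        return None
    return int(i), int(j)
```

A full decomposition loop would solve the two-variable subproblem analytically (as in SMO) or a slightly larger QP over the selected working set, update the gradient, and repeat until the selection returns None.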