Making large-scale support vector machine learning practical. Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. Advances in Kernel Methods.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
Duality and Geometry in SVM Classifiers. ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
A general soft method for learning SVM classifiers with L1-norm penalty. Pattern Recognition.
Simple solvers for large quadratic programming tasks. PR '05: Proceedings of the 27th DAGM Conference on Pattern Recognition.
On the generalization of soft margin algorithms. IEEE Transactions on Information Theory.
A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks.
A geometric approach to Support Vector Machine (SVM) classification. IEEE Transactions on Neural Networks.
A fast SVM training algorithm based on a decision tree data filter. MICAI '11: Proceedings of the 10th Mexican International Conference on Advances in Artificial Intelligence, Volume Part I.
It is well known that training an SVM with a linear slack penalty is equivalent to solving the Nearest Point Problem (NPP) over the so-called μ-Reduced Convex Hulls, that is, the sets of convex combinations of the positive and of the negative samples whose coefficients are bounded above by a value μ < 1. Although the extended GSK algorithm does not perform as well as the more complex recent proposal of Mavroforakis and Theodoridis, clipping the MDM coefficient updates yields a fast and efficient algorithm.
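To make the clipped-update idea concrete, here is a minimal sketch of an MDM-style iteration for the minimum-norm point of a μ-reduced convex hull of a single point set. This is an illustrative reconstruction, not the paper's implementation: the function name, interface, and stopping tolerance are assumptions; the clipping step simply keeps every coefficient inside [0, μ], which is the kind of update the abstract refers to.

```python
import numpy as np

def clipped_mdm(X, mu=1.0, n_iter=1000, tol=1e-12):
    """Sketch of MDM with clipped updates for the mu-reduced convex hull
    of the rows of X (hypothetical interface, not the paper's code)."""
    n = len(X)
    if mu * n < 1.0:
        raise ValueError("need mu * n >= 1 for the reduced hull to be non-empty")
    alpha = np.full(n, 1.0 / n)              # feasible start: uniform weights
    w = alpha @ X                            # current point of the hull
    for _ in range(n_iter):
        proj = X @ w
        support = np.flatnonzero(alpha > 0)  # donor must have weight to give
        i_max = support[np.argmax(proj[support])]
        room = np.flatnonzero(alpha < mu)    # receiver must have room under mu
        i_min = room[np.argmin(proj[room])]
        gap = proj[i_max] - proj[i_min]      # optimality gap along the move
        d = X[i_min] - X[i_max]
        dd = d @ d
        if dd == 0.0 or gap <= tol:
            break                            # (near-)optimal point reached
        t = gap / dd                         # unconstrained optimal step size
        # Clipping: the step may not drive alpha[i_max] below 0
        # nor push alpha[i_min] above the mu bound.
        t = min(t, alpha[i_max], mu - alpha[i_min])
        alpha[i_max] -= t
        alpha[i_min] += t
        w = w + t * d                        # incremental update of w
    return alpha, w
```

With μ = 1 this reduces to plain MDM over the full convex hull; for μ < 1 no single sample can dominate the solution, which mirrors how the slack penalty bounds the dual coefficients in soft-margin SVM training.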