Pruning Support Vector Machines Without Altering Performances
IEEE Transactions on Neural Networks
Support vector machines (SV machines, SVMs) often contain many SVs, which slow down the evaluation of their decision functions. To simplify the decision functions and make SVMs more compact, efforts have been made to remove SVs from trained SVMs. By carefully designing pruning coefficients for the removed SVs and solving for the coefficients of the remaining ones, this paper presents a simple method for quickly removing superfluous SVs. The method removes these SVs in a single step, substantially improving on the pruning speed of existing methods, which discard SVs one by one. The existence and uniqueness of the fast pruning coefficients are proved, and the connection between the primal and dual optimizations is illustrated geometrically. The fast pruning method also applies to other kernel-based machines without modification. The computational complexity is analyzed. Illustrative examples are given first, and experiments on larger data sets then demonstrate the effectiveness of the fast simplification method.
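The abstract does not spell out the construction, but one standard way to realize such one-shot pruning is to treat the reduced expansion as a least-squares approximation of the original decision function in the kernel feature space: minimizing the feature-space distance between the full and reduced expansions yields a single linear system for the new coefficients of the kept SVs. The following is a minimal sketch of that idea, not the paper's exact algorithm; the RBF kernel, the ridge regularizer, the helper names (`rbf_kernel`, `prune_one_shot`), and the choice of which SVs to keep are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Pairwise RBF kernel: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def prune_one_shot(sv, alpha, keep, gamma=0.5, ridge=1e-10):
    """Drop the SVs not in `keep` and recompute coefficients for the kept
    SVs in one linear solve, so the reduced kernel expansion is the
    least-squares approximation of the original decision function in
    the kernel feature space."""
    K_kk = rbf_kernel(sv[keep], sv[keep], gamma)   # kept vs. kept
    K_ka = rbf_kernel(sv[keep], sv, gamma)         # kept vs. all
    # Normal equations K_kk @ beta = K_ka @ alpha; the tiny ridge term
    # (an assumption, not from the paper) guards against singularity.
    beta = np.linalg.solve(K_kk + ridge * np.eye(len(keep)), K_ka @ alpha)
    return beta

# Toy usage: a kernel expansion with 5 support vectors, pruned to 3.
rng = np.random.default_rng(0)
sv = rng.normal(size=(5, 2))
alpha = rng.normal(size=5)
keep = np.array([0, 2, 4])
beta = prune_one_shot(sv, alpha, keep)

x = rng.normal(size=(1, 2))
f_full = rbf_kernel(x, sv) @ alpha
f_pruned = rbf_kernel(x, sv[keep]) @ beta
print(f_full, f_pruned)  # close when the kept SVs nearly span the removed ones
```

The point of the sketch is the shape of the computation rather than the specific formulas: all removed SVs are handled by one kernel-matrix solve instead of an iterative loop, which is where a single-step method gains its speed over one-by-one pruning.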