Properties of support vector machines
Neural Computation
Fast training of support vector machines using sequential minimal optimization
Advances in kernel methods
Reducing the run-time complexity in support vector machines
Advances in kernel methods
An introduction to Support Vector Machines and other kernel-based learning methods
Proximal support vector machine classifiers
Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining
Introduction to Algorithms
A Tutorial on Support Vector Machines for Pattern Recognition
Data Mining and Knowledge Discovery
An Efficient k-Means Clustering Algorithm: Analysis and Implementation
IEEE Transactions on Pattern Analysis and Machine Intelligence
Asymptotic behaviors of support vector machines with Gaussian kernel
Neural Computation
SVMTorch: support vector machines for large-scale regression problems
The Journal of Machine Learning Research
Sparse Bayesian learning and the relevance vector machine
The Journal of Machine Learning Research
Efficient SVM training using low-rank kernel representations
The Journal of Machine Learning Research
Exact simplification of support vector solutions
The Journal of Machine Learning Research
Decomposition methods for linear support vector machines
Neural Computation
The Entire Regularization Path for the Support Vector Machine
The Journal of Machine Learning Research
Leave-One-Out Bounds for Support Vector Regression Model Selection
Neural Computation
An Efficient Method for Simplifying Decision Functions of Support Vector Machines
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Working Set Selection Using Second Order Information for Training Support Vector Machines
The Journal of Machine Learning Research
Bounds on Error Expectation for Support Vector Machines
Neural Computation
Building Support Vector Machines with Reduced Classifier Complexity
The Journal of Machine Learning Research
A dual coordinate descent method for large-scale linear SVM
Proceedings of the 25th international conference on Machine learning
LIBLINEAR: A Library for Large Linear Classification
The Journal of Machine Learning Research
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
The kernel recursive least-squares algorithm
IEEE Transactions on Signal Processing
Online Kernel-Based Classification Using Adaptive Projection Algorithms
IEEE Transactions on Signal Processing - Part I
Asymptotic convergence of an SMO algorithm without any assumptions
IEEE Transactions on Neural Networks
Active set support vector regression
IEEE Transactions on Neural Networks
Incremental training of support vector machines
IEEE Transactions on Neural Networks
Survey of clustering algorithms
IEEE Transactions on Neural Networks
SMO-based pruning methods for sparse least squares support vector machines
IEEE Transactions on Neural Networks
A study on SMO-type decomposition methods for support vector machines
IEEE Transactions on Neural Networks
Fast Sparse Approximation for Least Squares Support Vector Machine
IEEE Transactions on Neural Networks
On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization
IEEE Transactions on Neural Networks
Pruning Support Vector Machines Without Altering Performances
IEEE Transactions on Neural Networks
Online independent reduced least squares support vector regression
Information Sciences: an International Journal
Support vector machine (SVM) classifiers often contain many support vectors (SVs), which lead to high computational cost at runtime and potential overfitting. In this paper, a practical and effective method for pruning SVM classifiers is developed systematically. The kernel row vectors, in one-to-one correspondence with the SVs, are first organized into clusters. The pruning then proceeds in two phases. In the first phase, orthogonal projections (OPs) identify kernel row vectors that can be approximated by the others. In the second phase, those vectors are removed, and crosswise propagations, which simply reuse the coefficients of the OPs, are performed within each cluster. The method circumvents the problem of explicitly identifying SVs in the high-dimensional feature space, as the SVM formulation requires, and does not suffer from local minima. With different parameters, 3000 experiments were run on the LibSVM software platform. After pruning 42% of the SVs, the average change in classification accuracy was only 0.7%, and the average computation time for removing one SV was 0.006 of the training time. In some scenarios, over 90% of the SVs were pruned with less than a 0.1% reduction in classification accuracy. The experiments demonstrate that trained SVMs contain large numbers of redundant SVs, and suggest a synergistic use of training and pruning in practice. Many SVMs already deployed in applications could be improved by pruning nearly half of their SVs.
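The core idea described in the abstract — approximating one SV's kernel row vector by the others via orthogonal projection, then folding its dual coefficient into theirs so the decision function is preserved — can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's exact algorithm: the function name, the least-squares realization of the projection, and the tolerance are all assumptions, and the clustering phase is omitted.

```python
import numpy as np

def prune_redundant_sv(R, alpha, idx, tol=1e-3):
    """Try to prune SV `idx` (hypothetical helper, not the paper's API).

    R     : (n_sv, m) matrix whose i-th row is the kernel row vector of
            SV i, i.e. [k(x_i, z_1), ..., k(x_i, z_m)] over m points.
    alpha : (n_sv,) dual coefficients (labels folded in).
    Returns the reduced (R, alpha) and whether pruning succeeded.
    """
    rest = np.array([i for i in range(R.shape[0]) if i != idx])
    A = R[rest].T                    # columns: kernel rows of the others
    b = R[idx]                       # kernel row of the candidate SV
    # Orthogonal projection of b onto span of the other kernel rows,
    # realized here as a least-squares fit.
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    err = np.linalg.norm(A @ c - b) / (np.linalg.norm(b) + 1e-12)
    if err > tol:
        return R, alpha, False       # not well approximated; keep it
    # "Crosswise propagation": reuse the projection coefficients to
    # absorb the pruned SV's dual coefficient into the remaining ones,
    # so that alpha @ R is (approximately) unchanged.
    new_alpha = alpha[rest] + alpha[idx] * c
    return R[rest], new_alpha, True
```

As a sanity check, if one SV is an exact duplicate of another, its kernel row is reproduced exactly by the projection, pruning succeeds, and the decision values `alpha @ R` are unchanged.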