A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery.
Efficient computations for large least square support vector machine classifiers. Pattern Recognition Letters.
SMO algorithm for least-squares SVM formulations. Neural Computation.
Benchmarking Least Squares Support Vector Machine Classifiers. Machine Learning.
A tutorial on support vector regression. Statistics and Computing.
Working Set Selection Using Second Order Information for Training Support Vector Machines. The Journal of Machine Learning Research.
On the Equivalence of the SMO and MDM Algorithms for SVM Training. ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases, Part I.
Recursive reduced least squares support vector regression. Pattern Recognition.
First and Second Order SMO Algorithms for LS-SVM Classifiers. Neural Processing Letters.
Pruning error minimization in least squares support vector machines. IEEE Transactions on Neural Networks.
An improved conjugate gradient scheme to the solution of least squares SVM. IEEE Transactions on Neural Networks.
SMO-based pruning methods for sparse least squares support vector machines. IEEE Transactions on Neural Networks.
Reduced Support Vector Machines: A Statistical Theory. IEEE Transactions on Neural Networks.
Comments on "Pruning Error Minimization in Least Squares Support Vector Machines". IEEE Transactions on Neural Networks.
Fast Sparse Approximation for Least Squares Support Vector Machine. IEEE Transactions on Neural Networks.
As a promising method for pattern recognition and function estimation, the least squares support vector machine (LS-SVM) expresses training as the solution of a linear system rather than the quadratic programming problem required by conventional support vector machines (SVM). In this paper, using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and on that basis propose reduced formulations for LS-SVM. With this transformation, the number of conjugate gradient (CG) runs, the most time-consuming step in obtaining the numerical solution, is reduced from the two required by the scheme of Suykens et al. (1999) to one. A comparison of the computational speed of our method against the CG method of Suykens et al. and against first-order and second-order SMO methods on several benchmark data sets shows a reduction in training time of up to 44%.
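For orientation, the baseline that the paper improves on can be sketched as follows: a minimal NumPy/SciPy implementation of LS-SVM classifier training using the two-CG-run scheme of Suykens et al. (1999). The reduced formulation proposed in the abstract would replace the two `cg` calls below with a single CG solve of an unconstrained reduced system; the RBF kernel choice and the `gamma` and `width` values here are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg


def rbf_kernel(X1, X2, width=0.5):
    """Gaussian RBF kernel matrix K[i, j] = exp(-width * ||x1_i - x2_j||^2)."""
    sq1 = np.sum(X1 ** 2, axis=1)[:, None]
    sq2 = np.sum(X2 ** 2, axis=1)[None, :]
    return np.exp(-width * (sq1 + sq2 - 2.0 * X1 @ X2.T))


def train_lssvm(X, y, gamma=10.0, width=0.5):
    """Train an LS-SVM classifier by solving the KKT linear system

        [ 0    y^T             ] [ b     ]   [ 0 ]
        [ y    Omega + I/gamma ] [ alpha ] = [ 1 ]

    with Omega[i, j] = y_i * y_j * K(x_i, x_j), via two CG solves on the
    symmetric positive definite block H = Omega + I/gamma
    (the scheme of Suykens et al., 1999)."""
    n = y.size
    Omega = np.outer(y, y) * rbf_kernel(X, X, width)
    H = Omega + np.eye(n) / gamma

    # Two CG runs: H @ eta = y and H @ nu = 1.
    eta, _ = cg(H, y)
    nu, _ = cg(H, np.ones(n))

    # Eliminate b using the equality constraint y^T alpha = 0,
    # then back-substitute alpha = nu - b * eta.
    b = (y @ nu) / (y @ eta)
    alpha = nu - b * eta
    return alpha, b


def predict(X_train, y_train, alpha, b, X_test, width=0.5):
    """Decision rule sign(f(x)) with f(x) = sum_i alpha_i y_i K(x_i, x) + b."""
    K = rbf_kernel(X_test, X_train, width)
    return np.sign(K @ (alpha * y_train) + b)
```

Counting CG runs makes the abstract's claim concrete: each run costs O(n^2) per iteration for a dense n-by-n kernel matrix, so collapsing the two solves into one roughly halves the dominant training cost.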