Nonlinear Systems Modeling Using LS-SVM with SMO-Based Pruning Methods
ISNN '07: Proceedings of the 4th International Symposium on Neural Networks: Advances in Neural Networks
Solutions of least squares support vector machines (LS-SVMs) are typically nonsparse. Sparseness is usually imposed by repeatedly omitting the data points that introduce the smallest training errors and retraining on the remaining data. This iterative retraining requires considerably more computation than training a single nonsparse LS-SVM. In this paper, we propose a new pruning algorithm for sparse LS-SVMs: the sequential minimal optimization (SMO) method is introduced into the pruning process; in addition, instead of selecting pruning points by their training errors, we omit the data points that introduce the minimum change to the dual objective function. This new criterion is computationally efficient. Numerical experiments demonstrate the effectiveness of the proposed method in terms of both computational cost and classification accuracy.
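To make the pruning loop the abstract contrasts against concrete, the sketch below implements a baseline LS-SVM classifier (Suykens' linear KKT system) with classic error-based pruning: since the training error of point k is proportional to its Lagrange multiplier (e_k = alpha_k / gamma), the point with the smallest |alpha_k| is dropped and the model retrained. This is an illustrative baseline only; the paper's SMO solver and its dual-objective-change pruning criterion are not reproduced here, and all function names, the RBF kernel choice, and the hyperparameter values are assumptions for the example.

```python
import numpy as np


def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM classifier KKT linear system:

        [ 0        y^T           ] [ b     ]   [ 0 ]
        [ y   Omega + I / gamma  ] [ alpha ] = [ 1 ]

    with Omega_ij = y_i * y_j * K(x_i, x_j).  Returns (b, alpha).
    """
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]


def prune_lssvm(X, y, keep_frac=0.5, gamma=10.0, sigma=1.0):
    """Error-based pruning baseline: repeatedly drop the point with the
    smallest |alpha_k| (proportional to its training error) and retrain.
    Note the cost: one full linear solve per omitted point, which is the
    overhead the SMO-based method in the paper aims to reduce."""
    idx = np.arange(len(y))
    target = max(2, int(keep_frac * len(y)))
    while len(idx) > target:
        _, alpha = lssvm_train(X[idx], y[idx], gamma, sigma)
        idx = np.delete(idx, np.argmin(np.abs(alpha)))
    b, alpha = lssvm_train(X[idx], y[idx], gamma, sigma)
    return idx, alpha, b


def lssvm_predict(Xtr, ytr, alpha, b, Xte, sigma=1.0):
    """Decision function sign(sum_k alpha_k y_k K(x, x_k) + b)."""
    K = rbf_kernel(Xte, Xtr, sigma)
    return np.sign(K @ (alpha * ytr) + b)
```

The paper's criterion replaces the `argmin(|alpha|)` step: rather than ranking candidates by training error, it ranks them by the (cheaply estimated) change each removal would cause in the dual objective, avoiding a full retraining solve per pruning step.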