Most existing online support vector machine (SVM) algorithms can only grow the set of support vectors. This paper proposes an online error-tolerance-based support vector machine (ET-SVM) that both grows and prunes support vectors. Like least squares support vector machines (LS-SVM), ET-SVM converts the quadratic program (QP) of the standard SVM into a set of easily solved linear equations. Unlike LS-SVM, ET-SVM keeps the set of support vectors sparse and thus maintains a compact structure. As a result, ET-SVM significantly reduces computational time while preserving satisfactory learning accuracy. Simulation results verify the effectiveness of the proposed algorithm.
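The abstract gives no equations, but the LS-SVM reformulation it builds on is standard: the SVM QP is replaced by a single linear system in the dual variables and bias. A minimal sketch of that plain LS-SVM system follows (this is the well-known LS-SVM training step, not the paper's ET-SVM growing/pruning scheme; function names and parameter values are illustrative):

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian RBF kernel matrix between row-sample matrices X1 and X2
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=1.0):
    """Solve the LS-SVM linear system in place of the standard SVM QP:

        [ 0   1^T        ] [b]     [0]
        [ 1   K + I/C    ] [alpha] [y]

    Returns the bias b and dual coefficients alpha (one per sample,
    so the solution is dense -- the sparsity the abstract attributes
    to ET-SVM is exactly what plain LS-SVM lacks).
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]

def lssvm_predict(X_train, alpha, b, X_test, gamma=1.0):
    # Decision value f(x) = sum_i alpha_i k(x_i, x) + b
    return rbf_kernel(X_test, X_train, gamma) @ alpha + b
```

For example, training on the four XOR points with labels `[+1, +1, -1, -1]` and classifying by the sign of `lssvm_predict` recovers all four labels; note that every training sample receives a nonzero `alpha`, which illustrates why a pruning mechanism such as the one proposed here is needed to obtain a compact model.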