Information Sciences: an International Journal
Incremental learning has recently attracted growing attention, both in theory and in applications. This paper proposes incremental learning algorithms for the Lagrangian support vector machine (LSVM). LSVM is an improvement on the standard linear SVM for classification: it reformulates training as the minimization of an unconstrained, differentiable, convex program, which is solved by a simple iterative scheme with linear convergence. By applying the Sherman-Morrison-Woodbury identity, the only matrix that must be inverted has order equal to the dimensionality of the input space plus one, and this inversion is carried out once at the start of the algorithm, which substantially reduces computation time. The incremental learning algorithms presented in this paper cover two cases: online incremental learning and batch incremental learning. Because the matrix inverse after each increment is obtained from previously computed information, the computation need not be repeated from scratch. Experimental results show that the proposed algorithms outperform existing methods.
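To make the training scheme concrete, below is a minimal NumPy sketch of the LSVM iteration as published by Mangasarian and Musicant, where the Sherman-Morrison-Woodbury identity reduces the inversion of the m x m matrix Q = I/nu + H H^T to the inversion of an (n+1) x (n+1) matrix. The function name `lsvm_train`, the default parameters, and the stopping tolerance are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def lsvm_train(A, d, nu=1.0, alpha=None, tol=1e-5, max_iter=1000):
    """Sketch of the LSVM iteration for a linear classifier.

    A : (m, n) array of training points; d : (m,) labels in {+1, -1}.
    Returns (w, gamma) for the separating plane x.w - gamma = 0.
    """
    m, n = A.shape
    # H = D [A  -e], where D = diag(d) and e is the all-ones vector.
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])   # shape (m, n+1)
    # Q = I/nu + H H^T is m x m, but by the Sherman-Morrison-Woodbury
    # identity  Q^{-1} = nu (I - H S^{-1} H^T)  with  S = I/nu + H^T H,
    # so the only explicit inversion is of the (n+1) x (n+1) matrix S,
    # computed once at the start.
    S_inv = np.linalg.inv(np.eye(n + 1) / nu + H.T @ H)

    def apply_Q(x):
        return x / nu + H @ (H.T @ x)

    def apply_Q_inv(x):
        return nu * (x - H @ (S_inv @ (H.T @ x)))

    if alpha is None:
        alpha = 1.9 / nu        # linear convergence holds for 0 < alpha < 2/nu
    e = np.ones(m)
    u = apply_Q_inv(e)
    for _ in range(max_iter):
        # Fixed-point step: u <- Q^{-1}(e + ((Q u - e) - alpha u)_+)
        u_new = apply_Q_inv(e + np.maximum(apply_Q(u) - e - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    wg = H.T @ u                # equals [w; gamma]
    return wg[:n], wg[n]
```

A new point x is then classified by the sign of x.w - gamma, e.g. `np.sign(A_test @ w - gamma)` on a test matrix `A_test`.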
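The online incremental case hinges on updating the stored inverse when one new labeled point (x, y) arrives: appending the row h = y [x, -1] to H changes S = I/nu + H^T H only by the rank-one term h h^T, so the previous S^{-1} can be reused. The following sketch applies the Sherman-Morrison formula for this update; the paper's precise update rules may differ, and `lsvm_increment` is a hypothetical name introduced here for illustration.

```python
def lsvm_increment(S_inv, x, y):
    """Rank-one update of S^{-1} when one labeled point (x, y) arrives.

    Appending the row h = y * [x, -1] to H turns S = I/nu + H^T H into
    S + h h^T, and the Sherman-Morrison formula gives
        (S + h h^T)^{-1} = S^{-1} - (S^{-1} h)(h^T S^{-1}) / (1 + h^T S^{-1} h),
    so no inversion is recomputed from scratch.
    """
    h = y * np.append(x, -1.0)
    v = S_inv @ h                        # S^{-1} h (S_inv is symmetric)
    return S_inv - np.outer(v, v) / (1.0 + h @ v)
```

For the batch case, where k points arrive at once as rows H_k, the analogous Woodbury update needs only a k x k inversion instead of k successive rank-one steps.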