Hard margin support vector machines (HM-SVMs) risk overfitting in the presence of noise. Soft margin SVMs address this problem by introducing a capacity control term and achieve state-of-the-art performance. However, this remedy comes at a relatively high computational cost. In this paper, an alternative method, a greedy stagewise algorithm named GS-SVMs, is presented to deal with the overfitting of HM-SVMs without introducing a capacity control term. The most attractive property of GS-SVMs is that its computational complexity scales quadratically with the number of training samples in the worst case. Extensive empirical comparisons confirm the feasibility and validity of GS-SVMs.
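For reference, the contrast drawn above can be made concrete with the textbook primal formulations (these are the standard formulations, not details taken from the paper itself). The hard margin problem is feasible only when the data are separable, which is why noisy samples can force an overfitted solution:

\min_{w,\,b}\ \tfrac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1, \quad i = 1,\dots,n.

The soft margin variant introduces slack variables \xi_i together with the capacity control term C \sum_i \xi_i, which trades margin width against training errors; it is this extra term (and the tuning of C) that GS-SVMs seeks to avoid:

\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0.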