Estimating generalization capability is one of the most important problems in supervised learning, which is why various generalization error estimators have been proposed in the literature. In this paper we propose an approach based on randomly generated objects to improve the training step of a standard multi-class SVM classifier and thereby reduce its generalization error. The idea is to generate artificial test samples that help the classifier learn from its mistakes, by reintroducing the misclassified examples into the training set. However, adding misclassified examples to the training set induces a larger and more complex quadratic program underlying the decision rule. To keep this cost manageable as additional learning vectors are introduced, we integrate incremental training into our method.
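The training loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a nearest-centroid classifier stands in for the SVM solver, artificial samples are generated by perturbing random training points, and each artificial sample is assumed to inherit the label of the point it was generated from. All function names and parameters (`n_rounds`, `noise`, `n_artificial`) are hypothetical.

```python
import numpy as np

def fit(X, y):
    # Stand-in for SVM training: one centroid per class.
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(model, X):
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def train_with_artificial_samples(X, y, n_rounds=3, noise=0.5,
                                  n_artificial=50, seed=0):
    rng = np.random.default_rng(seed)
    model = fit(X, y)
    for _ in range(n_rounds):
        # Generate artificial test samples by perturbing random training
        # points; each keeps its source point's label (an assumption).
        idx = rng.integers(0, len(X), size=n_artificial)
        X_art = X[idx] + rng.normal(scale=noise, size=(n_artificial, X.shape[1]))
        y_art = y[idx]
        # Reintroduce only the misclassified artificial samples.
        wrong = predict(model, X_art) != y_art
        if not wrong.any():
            break
        X = np.vstack([X, X_art[wrong]])
        y = np.concatenate([y, y_art[wrong]])
        # An incremental SVM would warm-start from the previous solution
        # here instead of solving the enlarged problem from scratch.
        model = fit(X, y)
    return model
```

In the paper's setting, `fit` would be replaced by the incremental SVM update, which is where the complexity saving over re-solving the full quadratic program comes from.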