PAC-like upper bounds for the sample complexity of leave-one-out cross-validation. COLT '96: Proceedings of the Ninth Annual Conference on Computational Learning Theory.
Cross-validation for binary classification by real-valued functions: theoretical analysis. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
Learning the Dynamic Neural Networks with the Improvement of Generalization Capabilities. ICANN '02: Proceedings of the International Conference on Artificial Neural Networks.
Improving generalization capabilities of dynamic neural networks. Neural Computation.
Genetic algorithm-based feature set partitioning for classification problems. Pattern Recognition.
This article addresses the question of whether some recent Vapnik-Chervonenkis (VC) dimension-based bounds on sample complexity can be regarded as a practical design tool. Specifically, we are interested in bounds on the sample complexity for the problem of training a pattern classifier such that we can expect it to perform valid generalization. Early results using the VC dimension, while extremely powerful, suffered from the fact that their sample complexity predictions were rather impractical. More recent results have begun to improve the situation by attempting to take specific account of the precise algorithm used to train the classifier. We perform a series of experiments based on a task involving the classification of sets of vowel formant frequencies. The results of these experiments indicate that the more recent theories provide sample complexity predictions that are significantly more applicable in practice than those provided by earlier theories; however, we also find that the recent theories still have significant shortcomings.
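To make concrete why the early VC-dimension-based predictions were considered impractical, the following sketch evaluates the classical distribution-free sample-complexity bound in the well-known Blumer et al. (1989) form. This is a standard textbook bound, not the specific bounds studied in the article; the function name and parameter values are illustrative assumptions.

```python
import math

def vc_sample_complexity(vc_dim, epsilon, delta):
    """Classical PAC sample-complexity upper bound (Blumer et al., 1989):
    a sample of this size suffices to learn a hypothesis class of VC
    dimension `vc_dim` to error at most `epsilon` with probability at
    least 1 - `delta`, for any consistent learner and any distribution."""
    # Term driven by the desired confidence level 1 - delta.
    term_conf = (4.0 / epsilon) * math.log2(2.0 / delta)
    # Term driven by the capacity (VC dimension) of the hypothesis class.
    term_dim = (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon)
    return math.ceil(max(term_conf, term_dim))

# Even for a modest hypothesis class and modest accuracy targets, the
# bound demands thousands of examples, illustrating the impracticality
# the article refers to.
print(vc_sample_complexity(vc_dim=10, epsilon=0.1, delta=0.05))
```

Because the bound holds for every distribution and every consistent learner, it is necessarily loose in practice; the more recent algorithm-specific bounds the article examines attempt to tighten exactly this gap.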