We investigate the empirical applicability of several bounds (a number of which are new) on the true error rate of learned classifiers, which hold whenever the examples are chosen independently at random from a fixed distribution. The collection of tricks we use includes:

1. A technique using unlabeled data for a tight derandomization of randomized bounds.
2. A tight form of the progressive validation bound.
3. The exact form of the test set bound.

The bounds are implemented in the semibound package and are freely available.