An Error Bound Based on a Worst Likely Assignment
The Journal of Machine Learning Research
This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of the in-sample and out-of-sample data into in-sample and out-of-sample sets. The bounds apply whenever the in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than the Vapnik-Chervonenkis bounds, but they require more computation.
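The partition-based idea can be illustrated with a minimal sketch (this is an illustrative reconstruction, not the paper's exact bound, and all function names are hypothetical). If in-sample and out-of-sample data are exchangeable, then conditioned on the combined data the in-sample set is a uniformly random subset, so the number of in-sample errors among a fixed total number of errors follows a hypergeometric distribution. A "worst likely" count of out-of-sample errors is then the largest count that is still consistent, at confidence level delta, with the observed in-sample error count:

```python
from math import comb

def hypergeom_cdf(k, M, K, N):
    """P(X <= k) for X ~ Hypergeometric: population M, K marked items, N draws."""
    # math.comb returns 0 when the lower argument exceeds the upper one,
    # so out-of-range terms vanish automatically.
    return sum(comb(K, i) * comb(M - K, N - i) for i in range(k + 1)) / comb(M, N)

def worst_likely_out_errors(s, n, m, delta=0.05):
    """Hypothetical helper: largest out-of-sample error count r such that,
    if s + r of the n + m combined points were errors, observing at most s
    errors in a uniformly random in-sample subset of size n still has
    probability >= delta."""
    worst = 0
    for r in range(m + 1):
        # in-sample errors are a hypergeometric draw from the s + r total errors
        p = hypergeom_cdf(s, n + m, s + r, n)
        if p >= delta:
            worst = r
        else:
            break  # p is nonincreasing in r, so we can stop early
    return worst
```

For example, a classifier with zero errors on 100 in-sample points and 100 held-out points yields `worst_likely_out_errors(0, 100, 100)` out-of-sample errors at the 95% level; because the bound conditions on the actual combined sample rather than a worst case over all distributions, it is tighter than a Vapnik-Chervonenkis-style bound, at the cost of the combinatorial computation above.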