References:
Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
A learning criterion for stochastic rules. COLT '90: Proceedings of the Third Annual Workshop on Computational Learning Theory.
[FPS91] Probably almost Bayes decisions. COLT '91: Proceedings of the Fourth Annual Workshop on Computational Learning Theory.
Polynomial learnability of probabilistic concepts with respect to the Kullback-Leibler divergence. COLT '91: Proceedings of the Fourth Annual Workshop on Computational Learning Theory.
In this paper, we investigate the problem of classifying objects that are given by feature vectors with Boolean or real entries. Our aim is to “(efficiently) learn probably almost optimal classifications” from examples. A classical approach in pattern recognition uses empirical estimates of the Bayesian discriminant functions for this purpose. We analyze this approach for several classes of distributions: in the Boolean case, the k-th order Bahadur-Lazarsfeld expansions and the k-th order Chow expansions; in the continuous case, the class of normal distributions. In all cases we obtain polynomial upper bounds on the required sample size. The bounds for the Boolean case improve and extend results from [FPS91].
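To make the plug-in approach concrete, the sketch below estimates normal class-conditional densities from a labeled sample and classifies with the resulting empirical Bayes discriminant, matching the abstract's continuous (normal distribution) case. This is a minimal illustration, not the paper's algorithm: all function names, the two-class usage example, and the small regularization constant are assumptions made for the sketch.

    import numpy as np

    def fit_plugin_gaussian(X, y):
        # Empirically estimate the prior, mean, and covariance of each class.
        # X: (n, d) array of real feature vectors; y: (n,) array of class labels.
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            prior = len(Xc) / len(X)
            mean = Xc.mean(axis=0)
            # Small ridge keeps the estimated covariance invertible (illustrative choice).
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            params[c] = (prior, mean, cov)
        return params

    def log_discriminant(x, prior, mean, cov):
        # log P(c) plus the log-density of N(mean, cov) at x, up to an additive
        # constant shared by all classes (it cancels in the comparison below).
        diff = x - mean
        return (np.log(prior)
                - 0.5 * np.linalg.slogdet(cov)[1]
                - 0.5 * diff @ np.linalg.solve(cov, diff))

    def classify(x, params):
        # Empirical Bayes rule: pick the class with the largest discriminant value.
        return max(params, key=lambda c: log_discriminant(x, *params[c]))

    # Illustrative usage on two synthetic Gaussian classes:
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    print(classify(np.array([2.5, 2.5]), fit_plugin_gaussian(X, y)))  # likely prints 1

The paper's sample-size bounds say, roughly, that polynomially many examples suffice for such a plug-in rule to be, with probability at least 1 - δ, within ε of the Bayes-optimal classification error.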