ICML '06 Proceedings of the 23rd international conference on Machine learning
Discriminative learning can succeed where generative learning fails
A classical approach to multi-class pattern classification is the following: estimate, for each label class, the probability distribution that generated its observations, and then label new instances by applying the Bayes classifier to the estimated distributions. This approach provides more than just a class label; in situations where classes overlap, it also yields estimates of the conditional distribution of class labels. We would like to know whether it is harder to build accurate classifiers via this approach than by techniques that may process all data with distinct labels together. In this paper we make that question precise by considering it in the context of PAC learnability. We propose two restrictions on the PAC learning framework that are intended to correspond to the above approach, and consider their relationship with standard PAC learning. Our main restriction of interest leads to some interesting algorithms showing that the restriction is not stronger (more restrictive) than various other well-known restrictions on PAC learning. An alternative, slightly milder restriction turns out to be almost equivalent to unrestricted PAC learning.
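The generative approach described in the first sentence can be sketched in a few lines. This is a minimal illustration, not the paper's construction: it assumes one-dimensional data and Gaussian class-conditional densities, and the example data and function names are invented for illustration. The key property is that each class's distribution is estimated from that class's data alone; only the final Bayes rule combines the classes.

```python
import math

def fit_gaussian(xs):
    """Estimate mean and variance from one class's observations only."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def gaussian_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(x, models, priors):
    """Label x with the class maximising prior * estimated density."""
    scores = {c: priors[c] * gaussian_pdf(x, *models[c]) for c in models}
    return max(scores, key=scores.get)

# Hypothetical training data, kept separate per class.
data = {0: [0.1, -0.2, 0.3, 0.0], 1: [2.0, 2.2, 1.8, 2.1]}
n = sum(len(xs) for xs in data.values())
models = {c: fit_gaussian(xs) for c, xs in data.items()}   # per-class estimation
priors = {c: len(xs) / n for c, xs in data.items()}        # empirical class priors

print(bayes_classify(0.05, models, priors))  # a point near class 0's data
print(bayes_classify(1.9, models, priors))   # a point near class 1's data
```

The normalised scores in `bayes_classify` also give the estimated conditional distribution of class labels mentioned above, which is the extra information this approach provides beyond a hard label.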