We advocate analyzing the average-case complexity of learning problems and introduce an appropriate framework for this purpose. Within this framework we consider the problem of learning monomials, and the special case of learning monotone monomials, both in the limit and for on-line prediction, in two variants: from positive data only, and from both positive and negative examples. The well-known Wholist algorithm is completely analyzed, in particular its average-case behavior with respect to the class of binomial distributions. We consider different complexity measures: the number of mind changes, the number of prediction errors, and the total learning time. Tight bounds are obtained, implying that worst-case bounds are too pessimistic; on average, learning can be achieved exponentially faster.

Furthermore, we study a new learning model, stochastic finite learning, in which, in contrast to PAC learning, some information about the underlying distribution is given and the goal is to find a correct (not merely approximately correct) hypothesis. We develop techniques to obtain good bounds for stochastic finite learning from a precise average-case analysis of strategies for learning in the limit, and illustrate our approach for the case of learning monomials.
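The Wholist algorithm mentioned above is the classical strategy for learning monomials (conjunctions of literals) from positive data: start with the most specific hypothesis, the conjunction of all 2n literals, and delete every literal contradicted by a positive example. The following is a minimal sketch of that strategy in Python; the encoding of a hypothesis as a set of (variable index, required value) pairs and the name wholist are our own illustrative choices, not notation from the paper.

    def wholist(positive_examples, n):
        """Learn a monomial over n Boolean variables from positive examples.

        A hypothesis is a set of literals (i, b), read as "variable x_i
        must equal b". We start with the most specific hypothesis (all
        2n literals) and delete every literal that a positive example
        contradicts.
        """
        hypothesis = {(i, b) for i in range(n) for b in (False, True)}
        for example in positive_examples:  # each example is a 0/1 tuple of length n
            hypothesis = {(i, b) for (i, b) in hypothesis if example[i] == b}
        return hypothesis

    # Usage: two positive examples of the target monomial x0 AND NOT x2.
    examples = [(1, 0, 0), (1, 1, 0)]
    print(wholist(examples, 3))  # {(0, True), (2, False)}, i.e. x0 AND NOT x2

Note that the hypothesis changes only when at least one literal is deleted; this is exactly the "mind change" event counted by one of the complexity measures above. Since each of the 2n literals can be deleted at most once, the worst case allows up to 2n mind changes, whereas the abstract's point is that under binomial distributions the average-case behavior is exponentially better.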