Systems that learn: an introduction to learning theory for cognitive and computer scientists
Learning in the limit deals mainly with the question of what can be learned, but rarely with the question of how fast. The purpose of this paper is to develop a learning model that stays very close to Gold's model yet allows questions about the speed of convergence to be answered. To do this, we must assume that the positive examples are generated by some stochastic model. If the stochastic model is fixed (measure-one learning), then all recursively enumerable sets are identifiable, but this strays greatly from Gold's model. In contrast, we define learning from random text as identifying a class of languages for every stochastic model in which examples are generated independently and identically distributed. As it turns out, this model stays close to learning in the limit. We compare both models in several respects, in particular under restrictions to various learning strategies and with regard to the existence of locking sequences. Lastly, we present some results on the speed of convergence: in general, convergence can be arbitrarily slow, but for recursive learners it cannot be slower than some "magic" function. Every language can be learned with exponentially small tail bounds, which are also the best possible. All results apply fully to Gold-style learners, since Gold's model is a proper subset of learning from random text.
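The setting can be illustrated with a minimal sketch (an assumption of this note, not an example from the paper): take the class of initial segments L_n = {0, …, n}, draw positive examples i.i.d. from a fixed distribution on L_n, and let the learner conjecture the largest example seen so far. The sequence of conjectures converges to the correct index with probability 1, for every such distribution.

```python
import random

def random_text(n, rng, length):
    """Yield i.i.d. positive examples from L_n = {0, ..., n}.

    The uniform distribution is an arbitrary choice for the sketch;
    any distribution with full support on L_n works.
    """
    for _ in range(length):
        yield rng.randint(0, n)

def learn(examples):
    """Return the sequence of conjectures: the max example seen so far."""
    conjectures = []
    best = 0
    for x in examples:
        best = max(best, x)
        conjectures.append(best)
    return conjectures

rng = random.Random(0)          # fixed seed for reproducibility
guesses = learn(random_text(10, rng, 200))
print(guesses[-1])              # almost surely the true index, 10
```

For this toy learner the tail bound from the abstract is easy to see: if p is the probability of the largest element of L_n, the conjecture is still wrong after m examples with probability at most (1 - p)^m, i.e. an exponentially small tail.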