In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner's memory not only in how much data may be stored, but also in how long those data may be stored without being refreshed. More specifically, the model requires that if the learner commits an example x to memory, and x is never presented to the learner again, then the learner eventually forgets x, i.e., x eventually no longer appears in the learner's memory. This model is called temporary example memory (Tem) learning. Several results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k+1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. Conversely, there exists a class of languages that can be identified by memorizing just one example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. Results are also presented for the special case of learning classes of infinite languages.
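To make the memory constraint concrete, the following is a minimal sketch of a Tem-style memory store. It is an illustrative assumption, not the paper's formal definition: here each stored example expires after a fixed number of unrefreshed steps (a TTL policy), which realizes the requirement that an example never presented again is eventually forgotten. The class name, the `ttl` parameter, and the decision to memorize an example only when room remains are all hypothetical choices for this sketch.

```python
class TemMemory:
    """Sketch of a temporary example memory of capacity k.

    Hypothetical model: a stored example survives `ttl` presentations
    unless it is refreshed (re-presented); otherwise it expires and is
    forgotten, as the Tem constraint demands.
    """

    def __init__(self, k, ttl=3):
        self.k = k        # at most k examples stored at any time
        self.ttl = ttl    # steps an example survives without refresh
        self.store = {}   # maps example -> remaining lifetime

    def observe(self, x):
        # Age every stored example; fully aged examples are forgotten.
        self.store = {e: t - 1 for e, t in self.store.items() if t > 1}
        # Refresh x if already stored; otherwise memorize it if room remains.
        if x in self.store or len(self.store) < self.k:
            self.store[x] = self.ttl

    def contents(self):
        return set(self.store)
```

For instance, with `k=2, ttl=2`, an example "a" that is observed once and never again disappears from memory after two further presentations of other data, while a repeatedly refreshed example persists.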