Learning with Temporary Memory

  • Authors:
  • Steffen Lange; Samuel E. Moelius III; Sandra Zilles

  • Affiliations:
  • Fachbereich Informatik, Hochschule Darmstadt; Department of Computer & Information Sciences, University of Delaware; Department of Computing Science, University of Alberta

  • Venue:
  • ALT '08 Proceedings of the 19th international conference on Algorithmic Learning Theory
  • Year:
  • 2008

Abstract

In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner's memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory in some stage of the learning process, then there is some subsequent stage at which x no longer appears in the learner's memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade.

Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of learning indexable classes of languages, and learning (arbitrary) classes of infinite languages.
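The memory discipline described above can be made concrete with a toy sketch. This is not from the paper: the class name, the bound k, and in particular the fixed time-to-live `ttl` aging rule are illustrative assumptions (the Tem model only requires that each memorized example is eventually dropped, not that it expires after a fixed number of stages). The sketch shows a learner that holds at most k examples and from which every stored example fades after at most `ttl` further stages.

```python
from collections import deque


class TemLearner:
    """Toy illustration of temporary example memory (Tem).

    The learner may retain at most k examples (the Bem-style bound on
    *how much* is remembered), and each stored example is evicted after
    at most `ttl` further stages (a simple stand-in for the Tem bound on
    *how long* it is remembered).
    """

    def __init__(self, k: int, ttl: int):
        self.k = k                 # bound on how many examples may be memorized
        self.ttl = ttl             # bound on how long an example may be retained
        self.memory = deque()      # pairs (example, stages_remaining)

    def step(self, example):
        # Memories fade: every stored example loses one stage of remaining
        # life, and examples whose life reaches zero are dropped.
        self.memory = deque(
            (x, t - 1) for x, t in self.memory if t - 1 > 0
        )
        # Commit the new example to memory if there is room; a real learner
        # would decide selectively which examples are worth memorizing.
        if len(self.memory) < self.k:
            self.memory.append((example, self.ttl))
        # A real learner would output a hypothesis (a grammar) here; the toy
        # just returns the examples it currently remembers.
        return [x for x, _ in self.memory]
```

Running a few stages shows the defining Tem property: the example committed in stage 1 eventually disappears from memory, whereas a Bem learner would be free to retain it forever.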