Incremental learning with temporary memory

  • Authors:
  • Sanjay Jain; Steffen Lange; Samuel E. Moelius, III; Sandra Zilles

  • Affiliations:
  • Department of Computer Science, National University of Singapore, Singapore 117417, Republic of Singapore; Fachbereich Informatik, Hochschule Darmstadt, Haardtring 100, 64295 Darmstadt, Germany; Department of Computer & Information Sciences, University of Delaware, Newark, DE 19716, USA; Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada S4S 0A2

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2010

Abstract

In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner's memory not only in how much data may be stored, but also in how long those data may be stored without being refreshed. More specifically, the model requires that, if the learner commits an example x to memory, and x is not presented to the learner again thereafter, then eventually the learner forgets x, i.e., eventually x no longer appears in the learner's memory. This model is called temporary example memory (Tem) learning. Several results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k+1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just one example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. Results are also presented concerning the special case of learning classes of infinite languages.
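The memory discipline described in the abstract can be illustrated with a small sketch. The Python code below is not from the paper: the names TemMemory, capacity, and patience are hypothetical, and the fixed staleness bound is a simplification, since the Tem model only requires that an unrefreshed example be forgotten eventually, not within any fixed number of steps.

```python
class TemMemory:
    """Illustrative memory for a Tem-style learner (hypothetical sketch).

    At most `capacity` examples are stored (the Bem-style size bound).
    An example that is never presented again is eventually evicted,
    modeled here by a staleness counter that resets whenever the
    example reappears in the input stream.
    """

    def __init__(self, capacity: int, patience: int) -> None:
        self.capacity = capacity        # at most this many examples stored
        self.patience = patience        # steps an unrefreshed example may persist
        self.age: dict[int, int] = {}   # example -> steps since last presentation

    def observe(self, x: int) -> None:
        # Every stored example ages by one step.
        for e in self.age:
            self.age[e] += 1
        # A re-presented example is refreshed; a new one may be committed.
        if x in self.age or len(self.age) < self.capacity:
            self.age[x] = 0
        # Temporary-memory constraint: forget anything not refreshed in time.
        self.age = {e: a for e, a in self.age.items() if a <= self.patience}

    def contents(self) -> set[int]:
        return set(self.age)


mem = TemMemory(capacity=2, patience=3)
for x in [1, 2, 1, 1, 1, 1]:
    mem.observe(x)
print(mem.contents())  # {1}: example 2 was never re-presented, so it was forgotten
```

In this toy run, example 2 is committed to memory but never re-presented, so after its staleness counter exceeds the bound it disappears from memory, while example 1 persists because each new presentation refreshes it.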