Optimal language learning from positive data

  • Authors:
  • John Case; Samuel E. Moelius, III

  • Affiliations:
  • Department of Computer & Information Sciences, University of Delaware, 101 Smith Hall, Newark, DE 19716, United States; IDA Center for Computing Sciences, 17100 Science Drive, Bowie, MD 20715-4300, United States

  • Venue:
  • Information and Computation
  • Year:
  • 2011


Abstract

Gold's original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that: requires as little of each presentation of each object in the class in order to identify that object; and, for some presentation of some object in the class, requires less of that presentation in order to identify that object. Beick considered this notion in the context of function learning, and gave an intuitive characterization of an optimal function learner. Jantke and Beick subsequently characterized the classes of functions that are algorithmically, optimally identifiable. Herein, Gold's notion is considered in the context of language learning. It is shown that a characterization of optimal language learners analogous to Beick's does not hold. It is also shown that the classes of languages that are algorithmically, optimally identifiable cannot be characterized in a manner analogous to that of Jantke and Beick. Other interesting results concerning optimal language learning include the following. It is shown that strong non-U-shapedness, a property involved in Beick's characterization of optimal function learners, does not restrict algorithmic language learning power. It is also shown that, for an arbitrary optimal learner F of a class of languages L, F optimally identifies a subclass K of L iff F is class-preserving with respect to K.
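
The Gold-style setting the abstract builds on can be illustrated with a small sketch. The following is a hypothetical example (not from the paper, and not of an *optimal* learner): it shows identification in the limit from positive data for the class of finite languages, where the learner's conjecture after each datum is simply the set of elements seen so far. On any presentation of a finite language, this learner's conjectures converge to a correct hypothesis once every element has appeared. The function names `learner` and `identifies` are illustrative, not the paper's notation.

```python
def learner(seen):
    """Conjecture: the language is exactly the data observed so far."""
    return frozenset(seen)

def identifies(language, presentation):
    """Run the learner along a finite prefix of a presentation (a 'text')
    and check whether its final conjecture equals `language`."""
    seen = []
    hypothesis = frozenset()
    for datum in presentation:
        seen.append(datum)
        hypothesis = learner(seen)
    return hypothesis == frozenset(language)

# A presentation of the finite language {1, 2, 3}, with repetitions allowed.
text = [1, 1, 2, 3, 2, 1]
print(identifies({1, 2, 3}, text))  # True: all elements have appeared
```

In the paper's terms, an *optimal* learner would additionally require no more of any presentation than any competing learner does before converging; the sketch above makes no such efficiency claim, it only illustrates convergence from positive data.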