Systems that learn: an introduction to learning theory for cognitive and computer scientists
Prudence and other conditions on formal language learning
Information and Computation
A Machine-Independent Theory of the Complexity of Recursive Functions
Journal of the ACM (JACM)
Inductive Inference: Theory and Methods
ACM Computing Surveys (CSUR)
Inductive Inference, DFAs, and Computational Complexity
AII '89 Proceedings of the International Workshop on Analogical and Inductive Inference
A Thesis in Inductive Inference
Proceedings of the 1st International Workshop on Nonmonotonic and Inductive Logic
Results on memory-limited U-shaped learning
Information and Computation
Non-U-shaped vacillatory and team learning
Journal of Computer and System Sciences
U-shaped, iterative, and iterative-with-counter learning
Machine Learning
Solutions to open questions for non-U-shaped learning with memory limitations
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Gold's original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that requires at most as much of each presentation of each object in the class in order to identify that object and, for some presentation of some object in the class, requires strictly less of that presentation. Beick considered this notion in the context of function learning and gave an intuitive characterization of an optimal function learner. Jantke and Beick subsequently characterized the classes of functions that are algorithmically, optimally identifiable. Herein, Gold's notion is considered in the context of language learning. It is shown that a characterization of optimal language learners analogous to Beick's does not hold. It is also shown that the classes of languages that are algorithmically, optimally identifiable cannot be characterized in a manner analogous to that of Jantke and Beick. Other results concerning optimal language learning include the following. It is shown that strong non-U-shapedness, a property involved in Beick's characterization of optimal function learners, does not restrict algorithmic language learning power. It is also shown that, for an arbitrary optimal learner F of a class of languages L, F optimally identifies a subclass K of L iff F is class-preserving with respect to K.
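The notion of "how much of a presentation a learner requires" can be made concrete with a toy sketch (a hypothetical illustration, not taken from the paper): for the class of all finite languages, the learner that conjectures exactly the set of elements seen so far identifies every finite language in the limit from any of its texts, and its convergence point on a text measures how much of that presentation it needs.

```python
# Toy Gold-style identification in the limit (illustrative sketch only).
# A "text" is a presentation of a language: an enumeration of its elements,
# with '#' marking a pause that carries no data.

def learner(prefix):
    """Conjecture for the class of finite languages: the set seen so far."""
    return frozenset(x for x in prefix if x != '#')

def convergence_point(text, target):
    """Least n such that the learner's conjecture equals the target on every
    prefix of length >= n of this (finite) text, i.e. how much of this
    presentation the learner requires before locking on."""
    for n in range(len(text) + 1):
        if all(learner(text[:m]) == target for m in range(n, len(text) + 1)):
            return n
    return None

# A text for the language {1, 2, 3}, with pauses and repetitions.
text = [1, '#', 2, 2, '#', 3, 1]
print(convergence_point(text, frozenset({1, 2, 3})))  # → 6
```

In these terms, a learner is optimal for a class iff no rival learner has a convergence point at most as large on every text for every language in the class and strictly smaller on some text.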