Solutions to open questions for non-U-shaped learning with memory limitations
ALT'10: Proceedings of the 21st International Conference on Algorithmic Learning Theory
In empirical cognitive science, a semantic or behavioral U-shape occurs in human learning when a learner first learns, then unlearns, and finally relearns some target concept. Within the formal framework of Inductive Inference, for learning from positive data, previous results have shown, for example, that such U-shapes are unnecessary for explanatory learning but necessary for behaviorally correct and for non-trivial vacillatory learning. Herein we additionally distinguish between semantic and syntactic U-shapes. We answer a number of open questions from the prior literature and provide new results on syntactic U-shapes. Importantly for cognitive science, we find further evidence for a previously noticed pattern: for parameterized learning criteria, beyond the first few parameter values, U-shapes are necessary for full learning power.

We analyze the necessity of U-shapes in two memory-limited settings. The first is Bounded Memory State (BMS) learning, where a learner has an explicitly bounded state memory and otherwise knows only its current datum. We show that there are classes learnable with three (or more) memory states that are not learnable non-U-shapedly with any finite number of memory states. This result is surprising, since U-shapes are known to be unnecessary for learning with one or two memory states; it solves an open question from the literature.

The second setting is Memoryless Feedback (MLF) learning, where a learner may ask a bounded number of questions about which data have been seen so far and otherwise knows only its current datum. We show that there is a class learnable memorylessly with a single feedback query that is not learnable non-U-shapedly memorylessly with any finite number of feedback queries.

For many of our results we employ self-learning classes together with the Operator Recursion Theorem, but we also introduce two new techniques. The first transfers inclusion results from one setting to another. The centerpiece of the second is the Hybrid Operator Recursion Theorem, which enables us to separate some learning criteria featuring complexity-bounded learners, again employing self-learning classes. Neither technique is specific to U-shaped learning; both apply to a wide range of settings.
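To make the semantic/syntactic distinction concrete, the following is a minimal sketch of the standard Gold-style definitions, under the usual notation (not spelled out in the abstract): T ranges over texts for a language L, T[n] is the initial segment of T of length n, h is the learner, and W_e is the language enumerated by program e. The paper's precise formulations may differ in detail.

```latex
% Explanatory learning (TxtEx): on every text for L, the learner h
% syntactically converges to a single correct index.
\forall T \text{ for } L \;\exists e \,\exists n_0 \,\forall n \ge n_0:\;
  h(T[n]) = e \;\wedge\; W_e = L

% Semantically non-U-shaped on L: once a conjecture is correct as a
% language, every later conjecture is also correct (no semantic U-shape).
\forall T \text{ for } L \;\forall n \,\forall m \ge n:\;
  W_{h(T[n])} = L \;\Longrightarrow\; W_{h(T[m])} = L

% Syntactically (strongly) non-U-shaped on L: once a conjecture is
% correct, the learner never abandons it, even syntactically.
\forall T \text{ for } L \;\forall n \,\forall m \ge n:\;
  W_{h(T[n])} = L \;\Longrightarrow\; h(T[m]) = h(T[n])
```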
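The BMS model can likewise be made concrete. Below is an illustrative Python sketch, not code from the paper: a BMS learner's only long-term memory is a state from a finite set, and everything named here (run_bms, toy_learner, the class {{0}, {0, 1}}, and the index values IDX_0, IDX_01) is hypothetical, chosen only to show the interface.

```python
# Illustrative sketch, not code from the paper: the Bounded Memory
# State (BMS) protocol. A learner with c memory states retains nothing
# about past data except a state in {0, ..., c-1}; on each datum it
# updates its state and outputs a conjecture (a grammar index).
from typing import Callable, Iterable, Optional, Tuple

# A BMS learner: (state, datum) -> (new_state, conjecture).
# `datum` is an element of the text, or None for a pause symbol.
BMSLearner = Callable[[int, Optional[int]], Tuple[int, int]]

def run_bms(learner: BMSLearner, text: Iterable[Optional[int]],
            num_states: int):
    """Feed a text to a BMS learner, enforcing the state bound."""
    state = 0  # by convention, start in state 0
    for datum in text:
        state, conjecture = learner(state, datum)
        assert 0 <= state < num_states, "state memory is explicitly bounded"
        yield conjecture

# Hypothetical 2-state learner for the toy class {{0}, {0, 1}}:
# state 1 records "a 1 has been seen". IDX_0 and IDX_01 stand for
# assumed grammar indices for {0} and {0, 1}.
IDX_0, IDX_01 = 100, 101

def toy_learner(state: int, datum: Optional[int]) -> Tuple[int, int]:
    if state == 1 or datum == 1:
        return 1, IDX_01
    return 0, IDX_0

print(list(run_bms(toy_learner, [0, None, 1, 0], num_states=2)))
# prints [100, 100, 101, 101]
```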
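A corresponding sketch of the MLF protocol, again purely illustrative: between data items the learner remembers nothing, but while processing a datum it may ask up to a bounded number of membership queries against the history of the text. Whether the current datum counts as already seen is a convention that the paper fixes precisely; here it does count. The learner toy_mlf and the indices are hypothetical.

```python
# Illustrative sketch, not code from the paper: the Memoryless
# Feedback (MLF) protocol with a bounded number of queries per datum.
from typing import Callable, Optional

Oracle = Callable[[int], bool]          # "has x occurred in the text?"
MLFLearner = Callable[[Optional[int], Oracle], int]

def run_mlf(learner: MLFLearner, text, max_queries: int):
    """Feed a text to an MLF learner, enforcing the query bound."""
    seen = set()
    for datum in text:
        if datum is not None:
            seen.add(datum)             # convention: current datum counts
        queries = 0
        def oracle(x: int) -> bool:
            nonlocal queries
            queries += 1
            assert queries <= max_queries, "feedback queries are bounded"
            return x in seen
        yield learner(datum, oracle)

# Hypothetical single-query learner for the toy class {{0}, {0, 1}}:
IDX_0, IDX_01 = 100, 101                # assumed grammar indices

def toy_mlf(datum: Optional[int], ask: Oracle) -> int:
    return IDX_01 if ask(1) else IDX_0  # uses its one feedback query

print(list(run_mlf(toy_mlf, [0, None, 1, 0], max_queries=1)))
# prints [100, 100, 101, 101]
```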