U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it, and finally relearns it. Such behaviour, observed by psychologists, for example, in children learning the past tense of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Within Gold's formal model of learning languages from positive data, previous theoretical work has studied whether U-shaped learning is necessary for learning some tasks. Since human learning clearly involves memory limitations, in the present paper we consider the necessity of U-shaped learning for several learning models featuring memory limitations. Our results show that, in this memory-limited setting, the necessity of U-shaped learning depends on delicate trade-offs among the learner's ability to remember its own previous conjecture, to store some values in long-term memory, and to query whether or not items occurred in previously seen data, as well as on the learner's choice of hypothesis space.
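The two notions at play can be made concrete with a toy sketch (not from the paper; the language class, learner, and all function names are illustrative assumptions). An *iterative* learner is memory-limited in the strongest sense: its next conjecture may depend only on its previous conjecture and the current datum, not on the full sequence seen so far. A conjecture sequence is *U-shaped* with respect to a target if it is correct at some point, later incorrect, and later correct again.

```python
# Toy Gold-style learning from positive data (illustrative sketch only).
# Target class: L_n = {0, 1, ..., n}; a hypothesis is the conjectured maximum n.

def iterative_learner(prev_hypothesis, datum):
    """Memory-limited learner: next conjecture depends only on the
    previous conjecture and the single new datum."""
    if datum > prev_hypothesis:
        return datum          # grow the finite conjecture to cover the datum
    return prev_hypothesis    # otherwise keep the previous conjecture

def run(learner, text, initial=0):
    """Feed a text (sequence of positive data) to the learner one datum
    at a time; return the full sequence of conjectures."""
    hyp = initial
    conjectures = [hyp]
    for x in text:
        hyp = learner(hyp, x)
        conjectures.append(hyp)
    return conjectures

def is_u_shaped(conjectures, target):
    """True iff the sequence is correct, then incorrect, then correct again."""
    seen_correct = seen_dip = False
    for c in conjectures:
        if c == target:
            if seen_dip:
                return True   # correct -> incorrect -> correct
            seen_correct = True
        elif seen_correct:
            seen_dip = True   # was correct earlier, now incorrect
    return False

# On a text for L_3 = {0,1,2,3}, this learner converges monotonically:
# run(iterative_learner, [1, 0, 3, 2, 3]) -> [0, 1, 1, 3, 3, 3], not U-shaped.
# By contrast, a conjecture sequence such as [3, None, 3] (correct, then an
# abandoned conjecture, then correct again) is U-shaped w.r.t. target 3.
```

This particular iterative learner never returns to an abandoned conjecture, so it is non-U-shaped by construction; the paper's results concern classes where such returning behaviour is, or is not, avoidable under various memory restrictions.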