Systems that learn: an introduction to learning theory for cognitive and computer scientists
Theory of recursive functions and effective computability
Prudence and other conditions on formal language learning
Information and Computation
Computability, complexity, and languages (2nd ed.): fundamentals of theoretical computer science
Open problems in “systems that learn”
Proceedings of the 30th IEEE Symposium on Foundations of Computer Science
Language learning from texts: mind changes, limited memory, and monotonicity
Information and Computation
Incremental learning from positive data
Journal of Computer and System Sciences
Incremental concept learning for bounded data mining
Information and Computation
A Machine-Independent Theory of the Complexity of Recursive Functions
Journal of the ACM (JACM)
The Power of Vacillation in Language Learning
SIAM Journal on Computing
Machine Inductive Inference and Language Identification
Proceedings of the 9th Colloquium on Automata, Languages and Programming
Non U-shaped vacillatory and team learning
ALT '05: Proceedings of the 16th International Conference on Algorithmic Learning Theory
Memory-limited U-shaped learning
COLT '06: Proceedings of the 19th Annual Conference on Learning Theory
Results on memory-limited U-shaped learning
Information and Computation
Iterative learning from positive data and negative counterexamples
Information and Computation
U-shaped, iterative, and iterative-with-counter learning
Machine Learning
Parallelism Increases Iterative Learning Power
ALT '07: Proceedings of the 18th International Conference on Algorithmic Learning Theory
This paper solves an important problem left open in the literature by showing that U-shapes are unnecessary in iterative learning. A U-shape occurs when a learner first learns, then unlearns, and finally relearns some target concept. Iterative learning is a Gold-style learning model in which each of a learner's output conjectures depends only on the learner's immediately previous conjecture and on the most recent input element. Previous results had shown, for example, that U-shapes are unnecessary for explanatory learning but necessary for behaviorally correct learning. Work on the aforementioned problem led to the consideration of an iterative-like learning model in which each of a learner's conjectures may additionally depend on the number of elements presented to the learner so far. Learners in this new model are strictly more powerful than traditional iterative learners, yet not as powerful as full explanatory learners. Can every class of languages learnable in this new model be learned without U-shapes? That question is left open.
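To make the iterative model concrete, the following is a minimal illustrative sketch (not code from the paper, and the names are invented for illustration): a learner for the class of languages L_n = {0, 1, ..., n}, where each conjecture is computed from only the previous conjecture and the single newest input element, exactly as the iterative model restricts.

```python
# Illustrative sketch, assuming the class of target languages
# L_n = {0, 1, ..., n}, presented as positive data (a text).
# The learner's update sees only its previous conjecture and the
# newest element -- the defining restriction of iterative learning.

def iterative_learner(prev_conjecture, element):
    """Return a new conjecture: the index n of the guessed language L_n."""
    if prev_conjecture is None:          # no data seen yet: guess the element
        return element
    return max(prev_conjecture, element)

def run_on_text(text):
    """Feed positive examples to the learner one at a time."""
    conjecture = None
    for x in text:
        conjecture = iterative_learner(conjecture, x)
    return conjecture

# On any text for L_3 = {0, 1, 2, 3}, the learner converges to conjecture 3;
# its conjecture sequence only grows, so no U-shape occurs for this class.
print(run_on_text([1, 0, 3, 2, 3]))
```

Because `max` is order-independent and monotone, the conjecture sequence here never abandons and later re-adopts a correct hypothesis; the paper's result concerns whether such U-shape-free behavior can always be achieved for iteratively learnable classes.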