Systems that learn: an introduction to learning theory for cognitive and computer scientists.
Theory of recursive functions and effective computability.
Prudence and other conditions on formal language learning. Information and Computation.
Monotonic and non-monotonic inductive inference. New Generation Computing (selected papers from the International Workshop on Algorithmic Learning Theory, 1990).
On the power of inductive inference from good examples. Theoretical Computer Science.
Open problems in “Systems that learn”. Proceedings of the 30th IEEE Symposium on Foundations of Computer Science.
Language learning with some negative information. Journal of Computer and System Sciences.
Language learning from texts: mind changes, limited memory, and monotonicity. Information and Computation.
The power of vacillation in language learning. SIAM Journal on Computing.
Generalization and specialization strategies for learning r.e. languages. Annals of Mathematics and Artificial Intelligence.
Characterization problems in the theory of inductive inference. Proceedings of the Fifth Colloquium on Automata, Languages and Programming.
Machine inductive inference and language identification. Proceedings of the 9th Colloquium on Automata, Languages and Programming.
Monotonic versus nonmonotonic language learning. Proceedings of the Second International Workshop on Nonmonotonic and Inductive Logic.
Variations on U-shaped learning. Information and Computation.
Non-U-shaped vacillatory and team learning. Proceedings of the 16th International Conference on Algorithmic Learning Theory (ALT 2005).
Memory-limited U-shaped learning. Proceedings of the 19th Annual Conference on Learning Theory (COLT 2006).
Some recent results in U-shaped learning. Proceedings of the Third International Conference on Theory and Applications of Models of Computation (TAMC 2006).
Learning in Friedberg numberings. Information and Computation.
Prescribed learning of indexed families. Fundamenta Informaticae.
U-shaped, iterative, and iterative-with-counter learning. Machine Learning.
Proceedings of the 19th International Conference on Algorithmic Learning Theory (ALT 2008).
Prescribed learning of r.e. classes. Theoretical Computer Science.
Hypothesis spaces for learning. Proceedings of the 3rd International Conference on Language and Automata Theory and Applications (LATA 2009).
Solutions to open questions for non-U-shaped learning with memory limitations. Proceedings of the 21st International Conference on Algorithmic Learning Theory (ALT 2010).
Hypothesis spaces for learning. Information and Computation.
Optimal language learning from positive data. Information and Computation.
Iterative learning from positive data and counters. Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT 2011).
Memory-limited non-U-shaped learning with solved open problems. Theoretical Computer Science.
Iterative learning from positive data and counters. Theoretical Computer Science.
Overregularization seen in child language learning, for example in verb tense constructions, involves abandoning correct behaviours for incorrect ones and later reverting to the correct behaviours. A number of other child-development phenomena also follow this U-shaped pattern of learning, unlearning and relearning. A decisive learner does not do this and, more generally, never abandons a hypothesis H for an inequivalent one and then later conjectures a hypothesis equivalent to H, where equivalence means semantic or behavioural equivalence. The first main result of the present paper entails that decisiveness is a genuine restriction on Gold's model of explanatory (or in-the-limit) learning of grammars for languages from positive data; this result also solves an open problem posed in 1986 by Osherson, Stob and Weinstein. Second-time decisive learners semantically conjecture each of their hypotheses for any language at most twice; such learners, by contrast, are shown not to restrict Gold's model.

Non-U-shaped learning liberalizes decisiveness: the same restriction is imposed only on correct hypotheses rather than on all hypotheses output. The situation regarding learning power for non-U-shaped learning is a little more complex, as follows. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a single correct grammar. A slight variant, called BC-learning, requires only semantic convergence, that is, convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non-U-shaped learning does not restrict EX-learning; however, by an argument of Fulk, Jain and Osherson, non-U-shaped learning does restrict BC-learning.

The final section discusses the possible meaning of these results for cognitive science and indicates some avenues worthy of future investigation.
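For readers who want the criteria above pinned down, here is a minimal formalization in the standard notation of inductive inference; the notation is assumed here, not quoted from the paper itself. $W_e$ denotes the language generated by grammar (program) $e$, a text $T$ for a language $L$ enumerates exactly the elements of $L$, $T[n]$ is the length-$n$ initial segment of $T$, $M$ is a learner mapping finite sequences to grammars, and $\forall^{\infty} n$ means "for all but finitely many $n$".

\[
M \text{ EX-learns } L \iff (\forall \text{ texts } T \text{ for } L)(\exists e)\,\big[\, W_e = L \;\wedge\; \forall^{\infty} n\;\, M(T[n]) = e \,\big]
\]
\[
M \text{ BC-learns } L \iff (\forall \text{ texts } T \text{ for } L)(\forall^{\infty} n)\,\big[\, W_{M(T[n])} = L \,\big]
\]
\[
M \text{ is decisive} \iff \neg(\exists T)(\exists m<n<k)\,\big[\, W_{M(T[m])} = W_{M(T[k])} \;\wedge\; W_{M(T[n])} \neq W_{M(T[m])} \,\big]
\]
\[
M \text{ is non-U-shaped on } L \iff \neg(\exists \text{ texts } T \text{ for } L)(\exists m<n<k)\,\big[\, W_{M(T[m])} = L \;\wedge\; W_{M(T[n])} \neq L \;\wedge\; W_{M(T[k])} = L \,\big]
\]

In this notation, the first main result says that decisive EX-learners identify strictly fewer classes of languages than unrestricted EX-learners, while the second says that requiring non-U-shapedness costs nothing for EX-learning but is a proper restriction for BC-learning.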