Overgeneralization is a major issue in the identification of grammars for formal languages from positive data. Different formulations of generalization and specialization strategies have been proposed to address this problem, and recently there has been a flurry of activity investigating such strategies in the context of indexed families of recursive languages. The present paper studies the power of these strategies to learn recursively enumerable (r.e.) languages from positive data. In particular, the power of strong-monotonic, monotonic, and weak-monotonic strategies (together with their dual notions, which model specialization) is investigated for identification of r.e. languages. These investigations differ from the earlier ones on learning indexed families of recursive languages and at times require new proof techniques. A complete picture is provided of the relative power of each of the strategies considered. An interesting consequence is that the power of weak-monotonic strategies is equivalent to that of conservative strategies; this result parallels the scenario for indexed families of recursive languages. It is also shown that any identifiable collection of r.e. languages can be identified by a strategy exhibiting the dual of the weak-monotonic property. An immediate consequence of the proof of this result is that, if attention is restricted to infinite r.e. languages, then conservative strategies can identify every identifiable collection.
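To give a concrete feel for the notions in the abstract, the following is a minimal illustrative sketch (not taken from the paper) of a conservative learner for the simple indexed family L_n = {0, 1, ..., n}. The learner conjectures the least index consistent with the data seen so far and changes its mind only when the current hypothesis fails to contain a new datum, which is exactly the conservative property; since successive conjectures satisfy L_i ⊆ L_j under set inclusion, the learner is also strong-monotonic on this family.

```python
def conservative_learner(text):
    """Process a positive-data text (an iterable of natural numbers) for
    some language L_n = {0, ..., n}; yield the current hypothesis (the
    index n) after each datum.

    Conservative: the hypothesis changes only when a datum falls outside
    the currently conjectured language.
    """
    hypothesis = None
    for datum in text:
        if hypothesis is None or datum > hypothesis:
            hypothesis = datum  # mind change forced by contradicting data
        yield hypothesis

# On any text for L_3 (i.e. every element of {0,1,2,3} eventually appears),
# the learner converges in the limit to index 3.
conjectures = list(conservative_learner([1, 0, 3, 2, 3, 1]))
```

Here the hypothesis sequence on the sample text is 1, 1, 3, 3, 3, 3: the learner stabilizes once the largest element of the target language has been seen, illustrating identification in the limit for this (deliberately easy) family; the paper's results concern the much harder setting of arbitrary r.e. languages.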