Theory of recursive functions and effective computability
Prudence and other conditions on formal language learning
Information and Computation
Characterizations of monotonic and dual monotonic language learning
Information and Computation
Angluin's theorem for indexed families of r.e. sets and applications
Proceedings of the Ninth Annual Conference on Computational Learning Theory (COLT '96)
The synthesis of language learners
Information and Computation
The Power of Vacillation in Language Learning
SIAM Journal on Computing
Machine Inductive Inference and Language Identification
Proceedings of the 9th Colloquium on Automata, Languages and Programming
A Guided Tour Across the Boundaries of Learning Recursive Languages
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
Learning indexed families of recursive languages from positive data: A survey
Theoretical Computer Science
Prescribed learning of r.e. classes
Theoretical Computer Science
The object of investigation in this paper is the learnability of co-recursively enumerable (co-r.e.) languages based on Gold's [11] original model of inductive inference. In particular, the following learning models are studied: finite learning, explanatory learning, vacillatory learning and behaviourally correct learning. The relative effects of imposing further constraints, such as conservativeness and prudence, on these learning models are also investigated. Moreover, an extension of Angluin's [1] characterisation of identifiable indexed families of recursive languages to conservatively learnable co-r.e. classes is presented. In this connection, the paper considers the learnability of indexed families of recursive languages, uniformly co-r.e. classes and other general classes of co-r.e. languages. A containment hierarchy of co-r.e. learning models is thereby established. While this hierarchy is quite similar to its r.e. analogue, there are some surprising collapses when a co-r.e. hypothesis space is used; for example, vacillatory learning collapses to explanatory learning.