Let BC be the model of behaviourally correct function learning as introduced by Bärzdins [Theory of Algorithms and Programs, vol. 1, Latvian State University, 1974, pp. 82-88] and Case and Smith [Theoret. Comput. Sci. 25 (1983) 193-220]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BC_n to models from the literature and discuss confidence, team learning, and finitely defective hypotheses. Among other things, we prove that there is a trade-off between the number of semantic mind changes and the number of anomalies in the hypotheses. We also discuss consequences for language learning. In particular, we show that, in contrast to the case of function learning, the family of classes that are confidently BC-learnable from text is not closed under finite unions.
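For readers unfamiliar with the model, the following is a minimal sketch of the definitions the abstract refers to, written in the usual Case and Smith notation; the exact counting convention for BC_k is an assumption inferred from the abstract's description ("counting the number of extensional differences in the hypotheses"), not taken from the paper itself. Here f[n] denotes the initial segment (f(0), ..., f(n-1)) of a total recursive function f, and φ_e is the partial recursive function with index e.

```latex
% Sketch only: standard BC-identification, plus a semantic (extensional)
% mind change count as described in the abstract; the exact convention
% for BC_k is an assumption.
% f[n] is the initial segment (f(0), ..., f(n-1)); \varphi_e is the
% partial recursive function with index e.
\begin{align*}
  M \text{ BC-identifies } f
    &\iff \varphi_{M(f[n])} = f \ \text{for all but finitely many } n, \\
  M \ \mathrm{BC}_k\text{-identifies } f
    &\iff M \text{ BC-identifies } f \ \text{and}\
      \bigl|\{\, n : \varphi_{M(f[n+1])} \neq \varphi_{M(f[n])} \,\}\bigr| \le k .
\end{align*}
```

In this reading, a semantic mind change is a position at which the new hypothesis computes a different partial function from the previous one, regardless of whether the program index itself changed; syntactic changes between indices for the same function are not counted.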