Language Learning with a Bounded Number of Mind Changes
STACS '93 Proceedings of the 10th Annual Symposium on Theoretical Aspects of Computer Science
In the present paper we study the learnability of enumerable families L of uniformly recursive languages as a function of the number of allowed mind changes, i.e., with respect to a well-studied measure of efficiency. We distinguish between exact learning (L has to be learned with respect to the hypothesis space L itself), class preserving learning (L has to be inferred with respect to some hypothesis space G having the same range as L), and class comprising inference (L has to be inferred with respect to some hypothesis space G whose range includes the range of L), as well as between learning from positive data alone and learning from both positive and negative examples. This measure of efficiency is applied to prove the superiority of class comprising learning algorithms over class preserving ones, which in turn prove superior to exact learning algorithms. In particular, we considerably improve previously obtained results and show that a suitable choice of the hypothesis space may yield a considerable speed-up of learning algorithms, even when only positive examples are presented instead of both positive and negative data. Furthermore, we completely separate all modes of learning with a bounded number of mind changes from class preserving learning that avoids overgeneralization.
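For readers new to the underlying Gold-style framework, the following is a minimal sketch of the standard definitions behind the abstract's terminology, assuming the usual conventions of inductive inference; the symbols M (learner), T (text), k (mind change bound), and the calligraphic family names are illustrative and need not match the paper's own notation.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of Gold-style identification with at most k mind changes,
% under standard conventions; notation is illustrative, not the paper's.

Let $\mathcal{L} = L_0, L_1, L_2, \ldots$ be an enumerable family of
uniformly recursive languages (the family $L$ of the abstract), and let
$T = s_0, s_1, s_2, \ldots$ be a \emph{text} for some
$L \in \operatorname{range}(\mathcal{L})$, i.e., an enumeration of the
elements of $L$ (positive data only); write $T[n]$ for the initial
segment $s_0, \ldots, s_{n-1}$. A learner $M$ maps initial segments to
indices in a hypothesis space $\mathcal{G} = G_0, G_1, G_2, \ldots$ or
to the reserved symbol $?$ (no conjecture yet).

$M$ \emph{identifies} $L$ if, for every text $T$ for $L$, there is a
$j$ with $G_j = L$ such that $M(T[n]) = j$ for all sufficiently large
$n$. It does so with at most $k$ \emph{mind changes} if, in addition,
\[
  \bigl|\{\, n \in \mathbb{N} : {?} \neq M(T[n]) \neq M(T[n+1]) \,\}\bigr| \le k
\]
for every such text. The three regimes compared in the abstract differ
only in the admissible hypothesis space: \emph{exact} learning requires
$\mathcal{G} = \mathcal{L}$, \emph{class preserving} learning requires
$\operatorname{range}(\mathcal{G}) = \operatorname{range}(\mathcal{L})$,
and \emph{class comprising} learning requires
$\operatorname{range}(\mathcal{G}) \supseteq \operatorname{range}(\mathcal{L})$.
\end{document}

Learning from both positive and negative examples (an informant) is defined analogously, with the text replaced by an enumeration of all strings over the underlying alphabet, each labeled according to its membership in L.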