Some extensions of Gold's influential model of language learning by machine from positive data are considered. The criteria of successful learning studied feature convergence in the limit to vacillation among several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to $n + 1$ exactly correct grammars is allowed, but that cannot be learned if convergence in the limit is to no more than $n$ grammars, even when each of those $n$ grammars is permitted finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, of Case and Smith for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin, which provides a necessary condition for avoiding overgeneralization in learning from positive data, is extended to the vacillatory and other criteria of this paper. It is applied to prove another theorem to the effect that one can optimally eliminate half of the mistakes from final programs for vacillatory criteria if one is willing to converge in the limit to infinitely many different programs instead.

Child language learning may be sensitive to the order or timing of data presentation. It is shown, though, that for the vacillatory success criteria of this paper, there is no loss of learning power for machines that are insensitive to order in several ways simultaneously. For example, partly set-driven machines attend only to the set of positive data seen so far and to the length of the data sequence, not to the sequence itself. A machine M is weakly n-ary order independent $\stackrel{\text{def}}{\Leftrightarrow}$ for each language L on which, for some ordering of the positive data about L, M converges in the limit to a finite set of grammars, there is a finite set D of grammars (of cardinality $\leq n$) such that M converges to a subset of this same D for every ordering of the positive data for L. The theorem most difficult to prove in the paper implies that machines which are simultaneously partly set-driven and weakly n-ary order independent lose no learning power for converging in the limit to up to n grammars. Several variants of this theorem are obtained by modifying its proof, and some of these variants find application in this and other papers. Along the way it is also shown, for the vacillatory criteria, that learning power is not increased if the sequence of positive data presentation is restricted to be computable. Some of these results are nontrivial lifts of prior work for the $n = 1$ case done by the Blums; Wiehagen; Osherson, Stob, and Weinstein; Schäfer; and Fulk.
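To pin the central criterion down symbolically, the following is a sketch in standard LaTeX notation. It assumes Case's $\mathbf{TxtFex}_n$ terminology, writes $W_p$ for the language generated by grammar $p$, $T[t]$ for the first $t$ elements of a text (an ordering of the positive data) $T$, and $\overset{\infty}{\forall}$ for "for all but finitely many"; these symbols are conventions of the surrounding literature, not notation fixed by the abstract itself:

\[
M\ \mathbf{TxtFex}_n\text{-identifies } L
\;\stackrel{\text{def}}{\Leftrightarrow}\;
(\forall\,\text{texts } T \text{ for } L)(\exists D)
\bigl[\, |D| \leq n
\;\wedge\; (\forall p \in D)\, W_p = L
\;\wedge\; (\overset{\infty}{\forall}\, t)\, M(T[t]) \in D \,\bigr].
\]

In this notation, the main theorem separates learning with up to $n + 1$ final grammars from learning with no more than $n$, even when the $n$ final grammars may each commit finitely many errors. Angluin's subset principle, which the paper extends to the vacillatory criteria, can likewise be sketched in its usual telltale form: for each $L$ in a learnable class $\mathcal{L}$ there is a finite telltale set $S_L$ with

\[
S_L \subseteq L
\;\wedge\;
\neg(\exists L' \in \mathcal{L})\bigl[\, S_L \subseteq L' \subsetneq L \,\bigr],
\]

so that no proper sublanguage of $L$ in the class contains all of $L$'s telltale data, which is what blocks overgeneralization when learning from positive data alone.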