Avoiding Coding Tricks by Hyperrobust Learning
EuroCOLT '99 Proceedings of the 4th European Conference on Computational Learning Theory
Synthesizing Learners Tolerating Computable Noisy Data
ALT '98 Proceedings of the 9th International Conference on Algorithmic Learning Theory
On the Uniform Learnability of Approximations to Non-Recursive Functions
ALT '99 Proceedings of the 10th International Conference on Algorithmic Learning Theory
Robust Learning - Rich and Poor
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and the 5th European Conference on Computational Learning Theory
Learning recursive functions: A survey
Theoretical Computer Science
Robust learning of automatic classes of languages
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Robust learning of automatic classes of languages
Journal of Computer and System Sciences
Results in recursion-theoretic inductive inference have been criticized as depending on unrealistic self-referential examples. J.M. Barzdin (1974) proposed a way of ruling out such examples and conjectured that one of the earliest results of inductive inference theory would fall if his method were used. The author refutes Barzdin's conjecture and proposes a new line of research examining robust separations, which are defined using a strengthening of Barzdin's original idea. Preliminary results are presented, and the most important open problem is stated as a conjecture. The extension of this work from function learning to formal language learning is also discussed.