Systems that learn: an introduction to learning theory for cognitive and computer scientists
Theory of recursive functions and effective computability
Saving the phenomena: requirements that inductive inference machines not contradict known data
Information and Computation
Machine models and simulations
Handbook of theoretical computer science (vol. A)
Subrecursive programming systems: complexity & succinctness
An introduction to Kolmogorov complexity and its applications (2nd ed.)
The art of computer programming, volume 3: sorting and searching (2nd ed.)
Introduction to Automata Theory, Languages, and Computation
Inductive Inference, DFAs, and Computational Complexity
AII '89 Proceedings of the International Workshop on Analogical and Inductive Inference
Polynomial time and space shift-reduce parsing of arbitrary context-free grammars
ACL '91 Proceedings of the 29th annual meeting on Association for Computational Linguistics
Polynomial-Time algorithms for learning typed pattern languages
LATA'12 Proceedings of the 6th international conference on Language and Automata Theory and Applications
Learning in the limit with lattice-structured hypothesis spaces
Theoretical Computer Science
Learning language in the limit from positive data with polynomial-time updates does not, by itself, guarantee fair feasibility. Pitt (1989) observed that unfair delaying tricks can achieve polynomial-time updates while placing no feasibility constraint on the learning process as a whole. In this context, Yoshinaka (2009) compiled a useful list of properties, or restrictions, aimed at true feasibility. He also gave interesting examples of fair polynomial-time algorithms over particular uniformly polynomial-time decidable hypothesis spaces, each satisfying several of his properties. Yoshinaka claims that the combination of three restrictions on polynomial-time learners — consistency (which we call herein postdictive completeness), conservativeness, and prudence — is restrictive enough to stop Pitt's delaying tricks from working.

The present paper refutes this claim in three settings. In the setting of uniformly polynomial-time decidable hypothesis spaces with a few effective closure properties, the three restrictions allow maximal unfairness. The other two settings involve certain other uniformly decidable hypothesis spaces and general language-learning hypothesis spaces; in each of these settings, the three restrictions forbid some, but not all, Pitt-style delaying tricks. In the proofs of each of our theorems asserting that the three restrictions do not forbid some or all delaying tricks, the witnessing learners can be seen to employ delaying tricks explicitly.
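The flavor of a Pitt-style delaying trick can be sketched in code. The following Python snippet is an illustrative toy only, not a construction from the paper: `make_delayed`, `cost`, and `budget_per_item` are hypothetical names. A possibly very slow learner is wrapped so that each update spends only a fixed budget of simulated steps, repeating its last finished conjecture when the budget runs out — polynomial-time updates, but with no feasibility constraint on the overall process.

```python
def make_delayed(slow_learner, cost, budget_per_item=10):
    """Wrap a (possibly slow) learner so each update does bounded work.

    Toy model of Pitt's delaying trick: `slow_learner(prefix)` returns a
    conjecture for a data prefix, and `cost(prefix)` is its (possibly huge)
    step count. The wrapper banks a fixed step budget per new example and
    only advances the simulation when the bank can pay for it; otherwise it
    simply repeats its previous conjecture (the "delay").
    """
    state = {"bank": 0, "done": 0, "conjecture": None, "data": []}

    def update(example):
        state["data"].append(example)
        state["bank"] += budget_per_item
        # Finish as many pending conjectures of the slow learner as the
        # banked budget allows; stop (and stall) when it cannot pay.
        while state["done"] < len(state["data"]):
            prefix = state["data"][: state["done"] + 1]
            c = cost(prefix)
            if c > state["bank"]:
                break  # not enough budget yet: delay, keep the old guess
            state["bank"] -= c
            state["conjecture"] = slow_learner(prefix)
            state["done"] += 1
        return state["conjecture"]

    return update
```

With an exponential-cost learner such as `make_delayed(lambda p: max(p), lambda p: 2 ** len(p))`, the wrapper's answers lag further and further behind the data while each individual update stays cheap — the unfairness that the three restrictions are meant to, but do not always, rule out.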