In learning functions in the limit, an algorithmic learner obtains successively more data about a function and runs trials, each resulting in the output of a corresponding program, where, hopefully, these programs eventually converge to a correct program for the function. The authors set out to provide a feasible version of this learning in the limit: a version where each trial is conducted feasibly and there is some feasible limit on the number of trials allowed. Employed were basic feasible functionals, which query an input function as to its values and which provide each trial. An additional tally argument 0^i was supplied to the functionals for their execution of the i-th trial; in this way, more time resource was available for each successive trial. The mechanism employed to feasibly limit the number of trials was to feasibly count them down from some feasible notation for a constructive ordinal. Since all processes were feasible, their termination was feasibly detectable, and, so, it was possible to wait for the trials to terminate and suppress all the output programs but the last. Hence, although there is still an iteration of trials, the learning is a special case of what has long been known as total Fin-learning, i.e., learning in the limit where, on each function, the learner always outputs exactly one conjectured program. The general main results provide strict learning hierarchies where the trial count-down involves all and only notations for infinite limit ordinals. For the hierarchies featuring finitely many limit-ordinal jumps, upper and lower bounds on the total run time of the feasible Fin-learners are given in terms of finite stacks of exponentials. The authors provide, though, an example of how to regain feasibility by a suitable parameterized complexity analysis.
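The iteration scheme described above can be illustrated with a toy sketch. This is not the authors' construction: the ordinal notations here are simplified to pairs (a, b) denoting omega*a + b, the "feasible" trials are replaced by an ordinary line-fitting step, and the names `count_down` and `fin_learn_linear` are invented for illustration. The key structural points survive: each trial receives a growing tally argument, the number of trials is bounded by counting down from an ordinal notation (with a constructive choice at each limit step), and only the last conjecture is output, as in total Fin-learning.

```python
# Toy sketch (hypothetical, heavily simplified) of Fin-learning with an
# ordinal count-down bounding the number of trials.
# Ordinals below omega^2 are written as pairs (a, b), meaning omega*a + b.

def count_down(notation, data_size):
    """One count-down step. At a limit step (b == 0), pick a finite
    successor part based on the data seen so far -- a stand-in for the
    constructive choice a feasible counter would make."""
    a, b = notation
    if b > 0:
        return (a, b - 1)          # successor step
    if a > 0:
        return (a - 1, data_size)  # limit-ordinal jump
    return None                    # count-down exhausted

def fin_learn_linear(f, notation=(1, 0), max_data=20):
    """Run trials on successively more data about f (here: a linear
    function x -> m*x + c); each trial fits (m, c) to the samples seen.
    All intermediate conjectures are suppressed; only the last one is
    output, so exactly one program per function is conjectured."""
    conjecture = None
    i = 1
    while notation is not None and i < max_data:
        tally = "0" * i            # tally argument 0^i: more resource per trial
        xs = list(range(i + 1))
        ys = [f(x) for x in xs]    # data about f available at trial i
        m = ys[1] - ys[0]          # fit slope and intercept
        c = ys[0]
        conjecture = (m, c)        # conjecture of trial i (not yet output)
        notation = count_down(notation, len(xs))
        i += 1
    return conjecture              # output only the final conjecture
```

For example, `fin_learn_linear(lambda x: 3 * x + 2)` starts at notation (1, 0), takes one limit-ordinal jump, then finitely many successor steps, and returns the conjecture `(3, 2)`.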