Consider the following type of problem: there is an unknown function f : R^n → R^m, and a black box that, on query x ∈ R^n, returns f(x). Is there an algorithm that, using probes to the black box, can determine analytic information about f? (For example: "Is f a polynomial?", "Is f second-order differentiable at x = (0, 0, …, 0)?", etc.)

Clearly, for questions such as these, if we bound the number of probes the algorithm may make, no algorithm can carry out the task. On the other hand, if one allows an infinite iteration of a "probe, compute, and guess" process, then, quite surprisingly, for many such questions there are algorithms that are guaranteed to be correct in all but finitely many of their guesses. We call such questions Decidable In the Limit (DIL).

We analyze the class of DIL problems and provide a necessary and sufficient condition for the membership of a decision problem in this class. We offer an algorithm for any DIL problem, and apply it to several types of learning tasks.

Furthermore, if an a priori probability distribution P, according to which f is chosen, is available to the algorithm, then it can be strengthened into a finite algorithm. More precisely, for many distributions P, there exists a polynomial function l such that for every 0 < δ < 1 there is an algorithm that uses at most l(log(1/δ)) many probes and succeeds on more than (1 − δ) of the f's (as measured by P).

We believe that the new approach presented here will be found useful for many further applications.
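The "probe, compute, and guess" loop can be illustrated on a toy DIL question of our own choosing (not one taken from the paper): given black-box access to f : N → R, does f agree on all of N with a polynomial of degree at most 3? The sketch below fits the unique cubic through the first four probes and guesses "yes" until some later probe refutes it, after which it guesses "no" forever. If f is such a polynomial, the interpolant equals f and no guess is ever wrong; otherwise, some probe eventually disagrees and only finitely many guesses are wrong, which is exactly the DIL guarantee. The function name and the probe schedule are our own illustrative choices.

```python
def dil_is_cubic(f, stages):
    """Limit-decision loop for: does f agree on N with a polynomial
    of degree <= 3?  Probes f at 0, 1, 2, ... and emits one guess
    (True = 'yes, it is such a polynomial') after each probe."""
    xs = [0, 1, 2, 3]
    ys = [f(x) for x in xs]

    def cubic(x):
        # Lagrange interpolation: the unique degree-<=3 polynomial
        # through the first four probes, evaluated at x.
        total = 0.0
        for i, xi in enumerate(xs):
            term = ys[i]
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    refuted = False
    guesses = []
    for n in range(stages):
        # One mind change at most: once a probe disagrees with the
        # interpolant, guess 'no' at every later stage.
        if not refuted and abs(f(n) - cubic(n)) > 1e-9:
            refuted = True
        guesses.append(not refuted)
    return guesses
```

For instance, on f(x) = x^3 − 2x + 1 every guess is "yes", while on f(x) = 2^x the first four probes fit a cubic, the probe at x = 4 refutes it, and every guess from that stage on is "no". Note that this question admits a one-mind-change strategy; general DIL questions may require unboundedly many mind changes, which is where the paper's necessary-and-sufficient condition comes in.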