Recursively enumerable sets and degrees
Probability and plurality for aggregations of learning machines
Information and Computation
Inductive inference with bounded number of mind changes
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Relations between probabilistic and team one-shot learners (extended abstract)
COLT '91 Proceedings of the fourth annual workshop on Computational learning theory
Breaking the probability ½ barrier in FIN-type learning
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
The Power of Pluralism for Automatic Program Synthesis
Journal of the ACM (JACM)
Probabilistic inductive inference: a survey
Theoretical Computer Science
Probabilistic and team PFIN-type learning: General properties
Journal of Computer and System Sciences
A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture, which should be a correct index of f. FIN learning has two extensions: (1) if M flips fair coins and learns a function with a certain probability p, we have FIN(p)-learning; (2) when n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k, n] FIN team. Sometimes a team or a probabilistic learner can simulate another one if their probabilities p1, p2 (or team success ratios k1/n1, k2/n2) are close enough (Daley et al., in: Valiant, Warmuth (Eds.), Proc. 5th Annual Workshop on Computational Learning Theory, ACM Press, New York, 1992, pp. 203-217; Daley and Kalyanasundaram, Available from http://www.cs.pitt.edu/~daley/fin/fin.html, 1996). On the other hand, there are cut-points r which make simulation of FIN(p2) by FIN(p1) impossible whenever p2
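The connection between team success ratios k/n and success probabilities p can be illustrated with a small sketch (not from the paper itself): if each of n independent probabilistic learners outputs a correct index with probability p, then a [k, n] team's success criterion, at least k correct members, succeeds with a binomial tail probability.

```python
from math import comb

def team_success_prob(p: float, k: int, n: int) -> float:
    """P(at least k of n independent learners succeed, each with prob p).

    Toy model only: it assumes the n team members behave like independent
    coin-flipping FIN(p)-learners, which is a simplifying assumption.
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: a [1, 2] team whose members each succeed with probability 1/2
# succeeds with probability 1 - (1/2)^2 = 0.75, above the ratio k/n = 0.5.
print(team_success_prob(0.5, 1, 2))  # 0.75
```

This is only a probabilistic analogy; the simulation results cited above concern which ratios and probabilities can simulate one another as learning models, not just numerical comparisons of success chances.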