Hierarchies of probabilistic and team FIN-learning

  • Authors:
  • Andris Ambainis; Kalvis Apsītis; Rūsiņš Freivalds; Carl H. Smith

  • Affiliations:
  • Univ. of California, Berkeley; Univ. of Latvia, Rīga, Latvia; Univ. of Latvia, Rīga, Latvia; Univ. of Maryland, College Park

  • Venue:
  • Theoretical Computer Science
  • Year:
  • 2001

Abstract

A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture which should be a correct index of f. FIN learning has two extensions: (1) If M flips fair coins and learns a function with a certain probability p, we have FIN(p)-learning. (2) When n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k,n]FIN team. Sometimes a team or a probabilistic learner can simulate another one if their probabilities p1, p2 (or team success ratios k1/n1, k2/n2) are close enough (Daley et al., in: Valiant, Warmuth (Eds.), Proc. 5th Annual Workshop on Computational Learning Theory, ACM Press, New York, 1992, pp. 203-217; Daley and Kalyanasundaram, Available from http://www.cs.pitt.edu/~daley/fin/fin.html, 1996). On the other hand, there are cut-points r which make simulation of FIN(p2) by FIN(p1) impossible whenever p2 ≤ r < p1.
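
For intuition only (this sketch is not from the paper): the [k,n] team success criterion can be illustrated with a toy Monte Carlo model in which each of the n member machines is treated as an independent learner that outputs a correct index with probability p, and the team succeeds when at least k of them do. The function name team_success_rate and the independence assumption are hypothetical illustrations, not the authors' construction.

    import random

    def team_success_rate(n, k, p, trials=100_000):
        """Estimate how often a [k,n] team succeeds when each of its n
        member machines independently outputs a correct index of f with
        probability p (a toy stand-in for a FIN(p)-learner)."""
        wins = 0
        for _ in range(trials):
            correct = sum(random.random() < p for _ in range(n))
            if correct >= k:  # team succeeds when at least k members are right
                wins += 1
        return wins / trials

    # A [1,2] team of probability-1/2 learners succeeds about 3/4 of the
    # time under this toy model.
    print(team_success_rate(n=2, k=1, p=0.5))  # ~0.75

That a [1,2] team of probability-1/2 learners succeeds roughly three quarters of the time on this toy model gives a feel for why team success ratios k/n and probabilities p are compared on a common scale when one kind of learner simulates the other.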