Parallelism Increases Iterative Learning Power

  • Authors:
  • John Case; Samuel E. Moelius III

  • Affiliations:
  • Department of Computer & Information Sciences, University of Delaware, 103 Smith Hall, Newark, DE 19716 (both authors)

  • Venue:
  • ALT '07 Proceedings of the 18th international conference on Algorithmic Learning Theory
  • Year:
  • 2007

Abstract

Iterative learning ($\textbf{It}$-learning) is a Gold-style learning model in which each of a learner's output conjectures may depend only upon the learner's current conjecture and the current input element. Two extensions of the $\textbf{It}$-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, $n$ individual learners incorporating the first extension, and to allow the $n$ learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than $\textbf{It}$-learners, and collective learners resulting from the second extension increase in learning power as $n$ increases. Attention is paid to how one would actually implement a learner incorporating each extension. Parallelism is the underlying mechanism employed.
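To make the model concrete, the following is a minimal sketch of an It-learner and of the first extension. All names here (`it_learner`, `run_iteratively`, `run_parallel_instances`) and the max-based update rule are illustrative assumptions, not the paper's constructions: real It-learners output program conjectures, and the sketch simulates the parallel instantiations sequentially, one starting at each position of the input sequence.

```python
from typing import Callable, List, Optional

# A conjecture; None stands for "no conjecture yet".
# (Hypothetical stand-in: the paper's learners conjecture programs.)
Conjecture = Optional[int]

def it_learner(conjecture: Conjecture, element: int) -> Conjecture:
    # It-learning constraint: the new conjecture depends only on the
    # current conjecture and the current input element.
    # Toy update rule (an assumption): conjecture the max element seen.
    return element if conjecture is None else max(conjecture, element)

def run_iteratively(learner: Callable[[Conjecture, int], Conjecture],
                    text: List[int]) -> Conjecture:
    # Feed the input elements to the learner one at a time.
    c: Conjecture = None
    for x in text:
        c = learner(c, x)
    return c

def run_parallel_instances(learner: Callable[[Conjecture, int], Conjecture],
                           text: List[int]) -> List[Conjecture]:
    # First extension (sketched): a distinct instantiation of the single
    # learner is started at each input element; here the instantiation at
    # position i sees the suffix text[i:]. Parallel runs are simulated
    # sequentially for simplicity.
    return [run_iteratively(learner, text[i:]) for i in range(len(text))]
```

For example, `run_parallel_instances(it_learner, [3, 1, 2])` runs three instantiations, over `[3, 1, 2]`, `[1, 2]`, and `[2]`, yielding the conjectures `[3, 2, 2]`; the extra instantiations give later-started learners independent views of the input, which is the source of the added power discussed in the abstract.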