Maestro: a self-organizing peer-to-peer dataflow framework using reinforcement learning

  • Authors: C. van Reeuwijk
  • Affiliations: Vrije Universiteit Amsterdam, Amsterdam, Netherlands
  • Venue: Proceedings of the 18th ACM international symposium on High performance distributed computing
  • Year: 2009

Abstract

In this paper we describe Maestro, a dataflow computation framework for Ibis, our Java-based grid middleware. The novelty of Maestro is that it is a self-organizing peer-to-peer system: it distributes the tasks in a flow over the available nodes based on local decisions at each node, without any central coordination. As a result, the computations are more scalable, more resilient against failing nodes, and less sensitive to communication latencies. Maestro uses a task distribution approach based on reinforcement learning, a learning mechanism in which the positive outcome of a choice makes it more likely that the same choice is repeated in the future. Maestro selects the most efficient node for each stage in the computation based on the observed computation and communication times. To keep the system agile, selection decisions are made as late as possible without letting nodes fall idle. Using this task distribution algorithm, the nodes can be used efficiently, even in a heterogeneous system with failure-prone nodes communicating over high-latency connections.
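
The abstract describes the selection mechanism only at a high level. As a rough illustration of the idea, the Java sketch below keeps a running estimate of each node's observed completion time (computation plus communication) for a task type and greedily prefers the node with the lowest estimate; the class name, the smoothing factor, and the exponential update rule are assumptions for illustration, not Maestro's actual implementation.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of reinforcement-learning-style node selection;
    // names and the update rule are illustrative, not Maestro's API.
    public class NodeSelector {
        // Estimated completion time (compute + transfer) per node, in milliseconds.
        private final Map<String, Double> estimatedTime = new HashMap<>();
        // Smoothing factor: how strongly a new observation adjusts the estimate.
        private static final double ALPHA = 0.2;
        // Optimistic initial estimate so unexplored nodes still get tried.
        private static final double INITIAL_ESTIMATE = 1.0;

        public void registerNode(String node) {
            estimatedTime.putIfAbsent(node, INITIAL_ESTIMATE);
        }

        // Pick the node currently believed to finish the task fastest.
        public String selectNode() {
            String best = null;
            double bestTime = Double.MAX_VALUE;
            for (Map.Entry<String, Double> e : estimatedTime.entrySet()) {
                if (e.getValue() < bestTime) {
                    bestTime = e.getValue();
                    best = e.getKey();
                }
            }
            return best;
        }

        // Reinforce the estimate with the observed completion time of a finished task.
        public void recordObservation(String node, double observedMillis) {
            double old = estimatedTime.getOrDefault(node, INITIAL_ESTIMATE);
            estimatedTime.put(node, (1 - ALPHA) * old + ALPHA * observedMillis);
        }
    }

In a peer-to-peer setting, each node would maintain such estimates locally for its peers, which is consistent with the abstract's claim that task placement relies only on local decisions without central coordination.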