Randomization Helps to Perform Tasks on Processors Prone to Failures

  • Authors:
  • Bogdan S. Chlebus; Dariusz R. Kowalski


  • Venue:
  • Proceedings of the 13th International Symposium on Distributed Computing
  • Year:
  • 1999


Abstract

The problem of performing t tasks in a distributed system of p processors is studied. The tasks are assumed to be independent, similar (each takes one step to be completed), and idempotent (each can be performed many times and concurrently). The processors communicate by passing messages, and each of them may fail. This problem, usually called DO-ALL, was introduced by Dwork, Halpern, and Waarts. The distributed setting considered in this paper is as follows: the system is synchronous, the processors fail by stopping, and reliable multicast is available. The occurrence of faults is modeled by an adversary who has to choose at least c · p processors prior to the start of the computation, for a fixed constant 0 < c < 1. The main result shows that there is a sharp difference between the expected performance of randomized algorithms and the worst-case performance of deterministic algorithms solving the DO-ALL problem in such a setting. Performance is measured in terms of the work and communication of algorithms. Work is the total number of steps performed by all the processors while they are operational, including idling. Communication is the total number of point-to-point messages exchanged. Let effort be the sum of work and communication. A randomized algorithm is developed which has expected effort O(t + p (1 + log* p - log*(p/t))), where log* is the number of iterations of the log function required to bring the value of the function down to 1. For deterministic algorithms and their worst-case behavior, a lower bound Ω(t + p log t / log log t) on work holds, and it is matched by the work performed by a simple algorithm.
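The iterated logarithm log* appearing in the effort bound grows extremely slowly. A minimal sketch of how it is computed (a hypothetical helper, assuming base-2 logarithms, which is the standard convention):

```python
import math

def log_star(x: float) -> int:
    """Iterated logarithm: the number of times log2 must be
    applied before the value drops to at most 1."""
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

# log* grows extremely slowly:
# log_star(2) = 1, log_star(16) = 3, log_star(65536) = 4
```

Because log*(p) stays tiny even for astronomically large p, the expected effort O(t + p (1 + log* p - log*(p/t))) of the randomized algorithm is nearly linear in t + p, in contrast with the Ω(t + p log t / log log t) deterministic lower bound.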