Randomized Finite-State Distributed Algorithms as Markov Chains

  • Authors:
  • Marie Duflot, Laurent Fribourg, Claudine Picaronny

  • Venue:
  • DISC '01 Proceedings of the 15th International Conference on Distributed Computing
  • Year:
  • 2001

Abstract

Distributed randomized algorithms, when they operate under a memoryless scheduler, behave as finite Markov chains: the probability of going, at the n-th step, from a configuration x to another configuration y is a constant p that depends only on x and y. By Markov theory, we thus know that, no matter where the algorithm starts, the probability for the algorithm to be in a "recurrent" configuration after n steps tends to 1 as n tends to infinity. In terms of self-stabilization theory, this means that the set Rec of recurrent configurations is included in the set L of "legitimate" configurations. However, in the literature, the convergence of self-stabilizing randomized algorithms is always proved in an elementary way, without explicitly resorting to results of Markov theory. This yields proofs that are longer and sometimes less formal than they could be. One of our goals in this paper is to explain convergence results of randomized distributed algorithms in terms of Markov chain theory.

Our method relies on the existence of a non-increasing measure over the configurations of the distributed system. Classically, this measure counts the number of tokens of a configuration. The method also exploits a function D that expresses some distance between tokens, for a fixed number k of tokens. Our first result is to exhibit a sufficient condition Prop on this measure and on D which guarantees that, for memoryless schedulers, every recurrent configuration is legitimate. We extend this property Prop in order to handle arbitrary schedulers, although they may induce non-Markov-chain behaviours. We then explain how Markov's notion of "lumping" naturally applies to the distance D and allows us to analyze the expected convergence time of self-stabilizing algorithms. The method is illustrated on several examples of mutual exclusion algorithms (Herman, Israeli-Jalfon, Kakugawa-Yamashita).
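
To make the setting concrete, the sketch below (not taken from the paper) simulates Herman's self-stabilizing token ring under the synchronous, memoryless scheduler. The token count plays the role of the non-increasing measure mentioned in the abstract: it never grows, and the single-token configurations are the legitimate (and recurrent) ones. Function names such as tokens, herman_step and run are illustrative choices, not identifiers from the paper.

```python
import random

def tokens(x):
    """Process i holds a token iff its bit equals its left neighbour's bit."""
    n = len(x)
    return [i for i in range(n) if x[i] == x[(i - 1) % n]]

def herman_step(x):
    """One synchronous step of Herman's ring: every process moves at once."""
    n = len(x)
    new = list(x)
    for i in range(n):
        if x[i] == x[(i - 1) % n]:           # process i holds a token
            new[i] = random.randint(0, 1)    # toss a fair coin
        else:
            new[i] = x[(i - 1) % n]          # otherwise copy the neighbour's bit
    return new

def run(n=7, seed=1):
    """Run from an arbitrary configuration until a single token remains.

    The token count is a non-increasing measure; since the process is a
    finite Markov chain under this memoryless scheduler, it reaches a
    recurrent (here: single-token, i.e. legitimate) configuration with
    probability 1.  n must be odd so that at least one token always exists.
    """
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    trace = [len(tokens(x))]
    while len(tokens(x)) > 1:
        x = herman_step(x)
        steps += 1
        trace.append(len(tokens(x)))
    print("token counts per step:", trace)
    print("converged after", steps, "steps to", x)

if __name__ == "__main__":
    run()
```

Roughly speaking, lumping would merge configurations that agree on the inter-token distances; transitions between such classes then depend only on the class, which is what makes the expected convergence time tractable on the smaller, lumped chain.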