Probabilistic self-stabilization
Information Processing Letters
Token management schemes and random walks yield self-stabilizing mutual exclusion
PODC '90 Proceedings of the ninth annual ACM symposium on Principles of distributed computing
Uniform self-stabilizing ring orientation
Information and Computation
Probabilistic self-stabilizing mutual exclusion in uniform rings
PODC '94 Proceedings of the thirteenth annual ACM symposium on Principles of distributed computing
Uniform and Self-Stabilizing Token Rings Allowing Unfair Daemon
IEEE Transactions on Parallel and Distributed Systems
Memory space requirements for self-stabilizing leader election protocols
Proceedings of the eighteenth annual ACM symposium on Principles of distributed computing
Self-stabilization
Self-stabilizing systems in spite of distributed control
Communications of the ACM
Two-State Self-Stabilizing Algorithms for Token Rings
IEEE Transactions on Software Engineering
Analyzing Expected Time by Scheduler-Luck Games
IEEE Transactions on Software Engineering
Randomized dining philosophers without fairness assumption
Distributed Computing
Coupling and self-stabilization
Distributed Computing - Special issue: DISC 04
An elementary proof that Herman's ring is Θ(N²)
Information Processing Letters
Game-Based Probabilistic Predicate Abstraction in PRISM
Electronic Notes in Theoretical Computer Science (ENTCS)
All k-bounded policies are equivalent for self-stabilization
SSS'06 Proceedings of the 8th international conference on Stabilization, safety, and security of distributed systems
Algorithms and theory of computation handbook
Distributed randomized algorithms, when they operate under a memoryless scheduler, behave as finite Markov chains: the probability of going, at the n-th step, from a configuration x to another configuration y is a constant p that depends only on x and y. Markov theory then tells us that, no matter where the algorithm starts, the probability that after n steps it is in a "recurrent" configuration tends to 1 as n tends to infinity. In terms of self-stabilization theory, this means that the set Rec of recurrent configurations is included in the set L of "legitimate" configurations. In the literature, however, the convergence of self-stabilizing randomized algorithms is always proved in an elementary way, without explicitly resorting to results of Markov theory. This yields proofs that are longer, and sometimes less formal, than they could be. One of our goals in this paper is to explain convergence results of randomized distributed algorithms in terms of Markov chain theory.

Our method relies on the existence of a non-increasing measure over the configurations of the distributed system. Classically, this measure counts the number of tokens in a configuration. The method also exploits a function D that expresses a distance between tokens, for a fixed number k of tokens. Our first result exhibits a sufficient condition Prop on the measure and D which guarantees that, for memoryless schedulers, every recurrent configuration is legitimate. We then extend the property Prop to handle arbitrary schedulers, even though these may induce behaviours that are not Markov chains. Finally, we explain how Markov's notion of "lumping" naturally applies to the distance D and allows us to analyze the expected convergence time of self-stabilizing algorithms. The method is illustrated on several mutual exclusion algorithms (Herman, Israeli-Jalfon, Kakugawa-Yamashita).
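The convergence behaviour described above can be illustrated with a small simulation of Herman's ring, one of the mutual exclusion algorithms mentioned as examples. The sketch below assumes the standard synchronous formulation: an odd number of processes in a ring each hold one bit, process i holds a token exactly when its bit equals its left neighbour's, and each token-holder resets its bit to a fresh random value while the others copy their neighbour's bit. The function names (`herman_step`, `tokens`, `run`) are illustrative, not from the paper.

```python
import random

def herman_step(bits):
    """One synchronous round of Herman's ring.

    A process i holds a token iff bits[i] == bits[i-1] (indices mod n).
    Token-holders draw a fresh random bit; the others copy bits[i-1].
    """
    n = len(bits)
    new = bits[:]
    for i in range(n):
        if bits[i] == bits[i - 1]:          # process i holds a token
            new[i] = random.randint(0, 1)   # coin flip
        else:
            new[i] = bits[i - 1]            # deterministic copy
    return new

def tokens(bits):
    """Number of tokens: the non-increasing measure of the configuration."""
    return sum(bits[i] == bits[i - 1] for i in range(len(bits)))

def run(bits, max_steps=100_000):
    """Iterate until a legitimate (single-token) configuration is reached."""
    steps = 0
    while tokens(bits) > 1 and steps < max_steps:
        bits = herman_step(bits)
        steps += 1
    return bits, steps
```

For an odd ring size the token count is always odd (the number of positions where adjacent bits differ around a cycle is even), so the measure can only decrease, by pairwise token merges, until exactly one token remains; the recurrent configurations are precisely the single-token, i.e. legitimate, ones, and the expected convergence time is Θ(N²).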