Optimally work-competitive scheduling for cooperative computing with merging groups
Proceedings of the twenty-first annual symposium on Principles of distributed computing
The problem of cooperatively performing a set of t tasks in a decentralized setting where the computing medium is subject to failures is one of the fundamental problems in distributed computing. The setting with partitionable networks is especially challenging, as algorithmic solutions must accommodate the possibility that groups of processors become disconnected (and, perhaps, reconnected) during the computation. The efficiency of task-performing algorithms is often assessed in terms of their work: the total number of tasks, counting multiplicities, performed by all of the processors during the computation. In general, an adversary that is able to partition the network into g components can cause any task-performing algorithm to have work Ω(t·g), even if each group of processors performs no more than the optimal number of Θ(t) tasks.

Given such pessimistic lower bounds, and in order to better understand the practical implications of performing work in partitionable settings, we study distributed work scheduling and pursue a competitive analysis. Specifically, we study a simple randomized scheduling algorithm for p asynchronous processors, connected by a dynamically changing communication medium, to complete t known tasks. We compare the performance of the algorithm against that of an "off-line" algorithm with full knowledge of the future changes in the communication medium. We introduce the notion of computation width, which associates a natural number with a history of changes in the communication medium, and show both upper and lower bounds on competitiveness in terms of this quantity. Specifically, we show that the simple randomized algorithm achieves the competitive ratio (1 + cw/e), where cw is the computation width and e is the base of the natural logarithm; we then show that this ratio is tight.
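The Ω(t·g) lower bound for partitioned work can be illustrated with a small simulation. The sketch below (all names are hypothetical, not taken from the paper) models the worst case in which the g groups never reconnect: each group, unable to learn which tasks the other groups have completed, performs tasks chosen uniformly at random from those it has not yet performed itself, so the total work counted with multiplicities is exactly t per group.

```python
import random

def simulate_disconnected_work(t, groups, seed=0):
    """Hypothetical sketch: each disconnected group performs tasks in
    a uniformly random order until it has done all t tasks itself.
    Returns total work, counting multiplicities across groups."""
    rng = random.Random(seed)
    work = 0
    for _ in range(groups):
        remaining = list(range(t))  # tasks this group has not yet done
        while remaining:
            # pick a remaining task uniformly at random and perform it
            i = rng.randrange(len(remaining))
            remaining[i], remaining[-1] = remaining[-1], remaining[i]
            remaining.pop()
            work += 1
    return work

# With g groups that never merge, every group performs Theta(t) tasks,
# so total work is t * g, matching the Omega(t*g) bound.
print(simulate_disconnected_work(t=100, groups=5))  # → 500
```

When groups do merge and exchange knowledge of completed tasks, the random-selection rule keeps redundant work low; the paper's competitive analysis quantifies this via the computation width of the partition history.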