Applications of combinatorial designs in computer science
ACM Computing Surveys (CSUR)
On the value of information in distributed decision-making (extended abstract)
PODC '91 Proceedings of the tenth annual ACM symposium on Principles of distributed computing
STOC '92 Proceedings of the twenty-fourth annual ACM symposium on Theory of computing
Work-optimal asynchronous algorithms for shared memory parallel computers
SIAM Journal on Computing
Time-optimal message-efficient work performance in the presence of faults
PODC '94 Proceedings of the thirteenth annual ACM symposium on Principles of distributed computing
Performing Work Efficiently in the Presence of Faults
SIAM Journal on Computing
Distributed cooperation in the absence of communication (brief announcement)
Proceedings of the nineteenth annual ACM symposium on Principles of distributed computing
Fault-Tolerant Parallel Computation
Performing Tasks on Restartable Message-Passing Processors
WDAG '97 Proceedings of the 11th International Workshop on Distributed Algorithms
Optimal, Distributed Decision-Making: The Case of No Communication
FCT '99 Proceedings of the 12th International Symposium on Fundamentals of Computation Theory
Clock construction in fully asynchronous parallel systems and PRAM simulation
SFCS '92 Proceedings of the 33rd Annual Symposium on Foundations of Computer Science
Riemann's hypothesis and tests for primality
Journal of Computer and System Sciences
Optimally work-competitive scheduling for cooperative computing with merging groups
Proceedings of the twenty-first annual symposium on Principles of distributed computing
The Complexity of Synchronous Iterative Do-All with Crashes
DISC '01 Proceedings of the 15th International Conference on Distributed Computing
Distributed cooperation and adversity: complexity trade-offs
PCK50 Proceedings of the Paris C. Kanellakis memorial workshop on Principles of computing & knowledge: Paris C. Kanellakis memorial workshop on the occasion of his 50th birthday
Work-competitive scheduling for cooperative computing with dynamic groups
Proceedings of the thirty-fifth annual ACM symposium on Theory of computing
Group Membership and Wide-Area Master-Worker Computations
ICDCS '03 Proceedings of the 23rd International Conference on Distributed Computing Systems
Cooperative computing with fragmentable and mergeable groups
Journal of Discrete Algorithms
The complexity of synchronous iterative Do-All with crashes
Distributed Computing
Efficient gossip and robust distributed computation
Theoretical Computer Science
Latin squares with bounded size of row prefix intersections
Discrete Applied Mathematics
The Do-All problem with Byzantine processor failures
Theoretical Computer Science - Foundations of software science and computation structures
Dynamic load balancing with group communication
Theoretical Computer Science
Note: Latin squares with bounded size of row prefix intersections
Discrete Applied Mathematics
Internet computing of tasks with dependencies using unreliable workers
OPODIS'04 Proceedings of the 8th international conference on Principles of Distributed Systems
This paper presents a study of a distributed cooperation problem under the assumption that processors may be unable to communicate for a prolonged time. The problem for n processors is defined in terms of t tasks, known to all processors, that need to be performed efficiently. The results of this study characterize the ability of the processors to schedule their work so that when some processors establish communication, the wasted (redundant) work these processors have collectively performed prior to that time is controlled. The lower bound for wasted work presented here shows that for any set of schedules there are two processors such that, when they complete t1 and t2 tasks respectively, the number of redundant tasks is Ω(t1·t2/t). For n = t and for schedules longer than √n, the number of redundant tasks for two or more processors must be at least 2, while the upper bound on pairwise waste for schedules of length √n is shown to be 1. The efficient deterministic schedule construction is motivated by design theory. To obtain linear-length schedules, a novel deterministic and efficient construction is given; this construction has the property that pairwise wasted work increases gracefully as processors progress through their schedules. Finally, an analysis of a random scheduling solution shows that with high probability pairwise waste is well behaved at all times: specifically, two processors having completed t1 and t2 tasks, respectively, are guaranteed to have no more than t1·t2/t + δ redundant tasks, where δ = O(log n + √((t1·t2/t) log n)).
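The central quantity in the abstract, pairwise waste, is the number of tasks that appear in both processors' executed prefixes when they first communicate. The following sketch (an illustration only, not the paper's construction; all names and parameters are hypothetical) measures this overlap for the random-scheduling solution, where each processor simply executes the t tasks in an independent random order:

```python
import random

def pairwise_waste(sched_a, sched_b, t1, t2):
    """Redundant tasks after processor A has run the first t1 tasks of its
    schedule and processor B the first t2 of its own: the size of the
    intersection of the two executed prefixes."""
    return len(set(sched_a[:t1]) & set(sched_b[:t2]))

def random_schedules(n, t, rng):
    """Random scheduling: each of the n processors executes the t tasks
    in an independent uniformly random order."""
    tasks = list(range(t))
    schedules = []
    for _ in range(n):
        s = tasks[:]
        rng.shuffle(s)
        schedules.append(s)
    return schedules

if __name__ == "__main__":
    rng = random.Random(0)
    n = t = 100                      # the abstract's case n = t
    scheds = random_schedules(n, t, rng)
    t1, t2 = 30, 40
    # For random prefixes the expected overlap is t1*t2/t tasks;
    # the paper bounds the excess over this by delta with high probability.
    print(pairwise_waste(scheds[0], scheds[1], t1, t2))
```

Identical schedules give maximal waste (min(t1, t2) redundant tasks), which is why the deterministic constructions in the paper aim to keep any two processors' prefixes nearly disjoint for as long as possible.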