This paper addresses the problem of scheduling a DAG of unit-length tasks on asynchronous processors, i.e., processors having different and changing speeds. The objective is to minimize the makespan, the time needed to execute the entire DAG. Asynchrony is modeled by an oblivious adversary, which determines the processor speeds at each point in time; the adversary may change processor speeds arbitrarily and arbitrarily often, but makes its speed decisions independently of any random choices of the scheduling algorithm. This paper gives bounds on the makespan of two randomized online firing-squad scheduling algorithms, All and Level, and shows that both achieve good makespan even when asynchrony is arbitrarily extreme.

Let W and D denote, respectively, the number of tasks in the DAG and the length of its longest path, and let π_ave denote the average speed of the p processors during the execution. In All, each processor repeatedly chooses a random task to execute from among all ready tasks (tasks all of whose predecessors have been executed). Scheduler All is shown to have makespan

T_p = Θ( W / (p·π_ave) )                                              when W/D ≥ p log p,
T_p = Θ( (log p)^α · W / (p·π_ave) + (log p)^(1-α) · D / π_ave )      when W/D = p(log p)^(1-2α), for α ∈ [0, 1],
T_p = Θ( D / π_ave )                                                  when W/D ≤ p / log p,

both in expectation and with high probability. A family of DAGs is exhibited for which this analysis is tight. In Level, each processor repeatedly chooses a random task to execute from among all critical tasks (ready tasks at the lowest level of the DAG). This second scheduler is shown to have a makespan of.
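The two schedulers differ only in the candidate set each processor draws from: All uses every ready task, Level restricts to ready tasks at the lowest unfinished level. A minimal discrete-time sketch of both is given below. This is an illustration, not the paper's model: it discretizes changing speeds into per-step completion probabilities, and the names `simulate` and `levels` and the `speed` callback are assumptions introduced for the sketch.

```python
import random

def levels(preds):
    """Level of each task: length of a longest predecessor chain ending at it."""
    memo = {}
    def depth(v):
        if v not in memo:
            memo[v] = 0 if not preds[v] else 1 + max(depth(u) for u in preds[v])
        return memo[v]
    for v in preds:
        depth(v)
    return memo

def simulate(preds, p, speed, scheduler="All", rng=random):
    """Return the number of time steps the All or Level scheduler takes
    on a unit-task DAG.

    preds    : dict mapping each task to its list of predecessors
    p        : number of processors
    speed    : function t -> list of p values in [0, 1]; processor i
               completes its chosen task at step t with probability
               speed(t)[i] (a simplified stand-in for changing speeds;
               the adversary must keep average speed positive, or the
               simulation never terminates)
    scheduler: "All"   -- pick among all ready tasks
               "Level" -- pick among ready tasks at the lowest level
    """
    lvl = levels(preds)
    done = set()
    t = 0
    while len(done) < len(preds):
        ready = [v for v in preds if v not in done
                 and all(u in done for u in preds[v])]
        if scheduler == "Level":
            low = min(lvl[v] for v in ready)
            ready = [v for v in ready if lvl[v] == low]
        s = speed(t)
        for i in range(p):
            # Each processor independently picks a random candidate task;
            # several processors may redundantly execute the same task,
            # as the model permits.
            if ready and rng.random() < s[i]:
                done.add(rng.choice(ready))
        t += 1
    return t

# A diamond DAG a -> {b, c} -> d on 2 full-speed processors: the critical
# path has 3 tasks, so the makespan is 3, or 4 if both processors
# redundantly pick the same middle task.
dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(simulate(dag, p=2, speed=lambda t: [1.0, 1.0]))
```

Redundant execution is what makes the schedulers robust: no processor ever waits on a slow peer, at the cost of possibly duplicated work.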