How to Allocate Tasks Asynchronously
FOCS '12 Proceedings of the 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science
Consider a system in which tasks of different execution times arrive continuously and must be executed by a set of processors that are prone to crashes and restarts. In this paper we model and study the impact of parallelism and failures on the competitiveness of such an online system. In a fault-free environment, a simple Longest-in-System scheduling policy, enhanced by a redundancy-avoidance mechanism, guarantees optimality in long-term execution. In the presence of failures, though, scheduling becomes a much more challenging task. In particular, no parallel deterministic algorithm can be competitive against an offline optimal solution, even with a single processor and tasks of only two different execution times. We find that when additional energy is provided to the system in the form of processor speedup, the situation changes. Specifically, we identify thresholds on the speedup below which such competitiveness cannot be achieved by any deterministic algorithm, and above which competitive algorithms exist. Finally, we propose algorithms that achieve small bounded competitive ratios when the speedup is above the threshold.
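The Longest-in-System policy with redundancy avoidance described in the abstract can be sketched roughly as follows: each processor executes the pending task that has been waiting longest, skipping any task already reported complete elsewhere. This is a minimal illustrative sketch; the class and method names are assumptions, not taken from the paper.

```python
from collections import deque

class LISScheduler:
    """Sketch of a Longest-in-System scheduling policy with redundancy
    avoidance (illustrative; names are not from the paper).

    Tasks are kept in arrival (FIFO) order, so the front of the queue
    is always the task longest in the system. Tasks completed by any
    processor are recorded and skipped, avoiding redundant work.
    """

    def __init__(self):
        self.pending = deque()   # FIFO: oldest arrival at the front
        self.completed = set()   # redundancy avoidance: finished task ids

    def arrive(self, task_id):
        """A new task is injected into the system."""
        self.pending.append(task_id)

    def next_task(self):
        """Return the longest-in-system task not yet completed."""
        # Skip tasks reported complete elsewhere (redundancy avoidance).
        while self.pending and self.pending[0] in self.completed:
            self.pending.popleft()
        return self.pending[0] if self.pending else None

    def complete(self, task_id):
        """Record a task as finished (possibly on another processor)."""
        self.completed.add(task_id)

# Example: three tasks arrive; t1 finishes on another processor,
# so the next task to run is t2, the oldest remaining one.
sched = LISScheduler()
for t in ["t1", "t2", "t3"]:
    sched.arrive(t)
sched.complete("t1")
assert sched.next_task() == "t2"
```

Note that in the crash-restart setting studied in the paper, this simple policy alone no longer yields competitiveness; the paper's contribution concerns what speedup makes competitiveness possible.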