In this paper, we study the problem of optimizing the throughput of streaming applications on heterogeneous platforms subject to failures. Applications are linear task graphs (pipelines), with a type associated to each task. The challenge is to map each task onto one machine of a target platform. To avoid costly setups, each machine is specialized to process a single task type, although every machine is capable of processing any type before being specialized. Each job instance of a task can therefore be processed by any machine specialized in its type, and the workload can be shared among a set of specialized machines. The objective is to maximize the throughput, i.e., the rate at which jobs are processed once failures are accounted for. For identical machines, we prove that an optimal solution can be computed in polynomial time. The problem becomes NP-hard, however, as soon as two machines may process the same task type at different speeds. We design several polynomial-time heuristics for the most realistic specialized settings. Simulation results assess their efficiency: the best heuristics achieve a throughput far better than that of a random mapping, and close to the optimal solution in the particular cases where the optimal throughput can be computed.
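To make the objective concrete, the following sketch evaluates the throughput of one candidate mapping under a simplified model that is only an assumption here, not the paper's exact formulation: a machine specialized in a type contributes its speed scaled by its availability (one minus its failure probability), machines specialized in the same type share that type's workload, and the pipeline throughput is limited by its slowest task type. All names (`throughput`, `machines`, `assignment`) are hypothetical.

```python
# Illustrative sketch (assumed model, not the paper's exact one):
# throughput of a pipeline under a type-based machine specialization.

def throughput(task_types, machines, assignment):
    """task_types: list of task types along the pipeline.
    machines: dict name -> (speed, failure_prob).
    assignment: dict name -> the type the machine is specialized in.
    Returns the job rate of the bottleneck task type."""
    rate = {}
    for name, spec_type in assignment.items():
        speed, fail = machines[name]
        # A machine contributes speed scaled by its availability,
        # and machines of the same type pool their rates.
        rate[spec_type] = rate.get(spec_type, 0.0) + speed * (1.0 - fail)
    # A mapping leaving some task type uncovered processes no jobs.
    if any(t not in rate for t in task_types):
        return 0.0
    # The pipeline runs at the rate of its slowest task type.
    return min(rate[t] for t in task_types)
```

For example, with two task types and two machines, `throughput(["A", "B"], {"m1": (2.0, 0.5), "m2": (1.0, 0.0)}, {"m1": "A", "m2": "B"})` pools 1.0 job/unit for each type, so the pipeline rate is 1.0; a heuristic would search over assignments to maximize this value.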