We consider the problem of statically assigning many tasks to a (smaller) system of homogeneous processors, where each task's structure is modeled as a branching process, all tasks are assumed to behave identically, and tasks may synchronize frequently. We show how the theory of majorization can be used to obtain a partial order among possible task assignments: if the vector of the numbers of tasks assigned to each processor under one mapping is majorized by that of another mapping, then the former mapping is better than the latter with respect to a broad class of objective functions. In particular, this class captures the metrics of finishing time, space-time product, and reliability. We also apply majorization to the problem of partitioning a pool of processors for distribution among parallelizable tasks. Limitations of the approach, including the static nature of the assignment, are also discussed.
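The majorization order the abstract relies on can be sketched concretely. The check below is a minimal illustration of the standard definition (equal totals, dominance of descending partial sums), not code from the paper; the function name and the example assignment vectors are ours.

```python
def is_majorized_by(x, y):
    """Return True if vector x is majorized by vector y (x ≺ y):
    both sum to the same total, and each prefix sum of y sorted in
    descending order dominates the corresponding prefix sum of x."""
    if sum(x) != sum(y):
        return False
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    px = py = 0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px > py:
            return False
    return True

# A balanced assignment of 6 tasks to 3 processors is majorized by any
# less balanced one, so under the paper's result it is preferable.
print(is_majorized_by([2, 2, 2], [3, 2, 1]))  # True
print(is_majorized_by([3, 2, 1], [2, 2, 2]))  # False
print(is_majorized_by([3, 2, 1], [6, 0, 0]))  # True
```

Since majorization is only a partial order, some pairs of assignment vectors (e.g. with different totals, or with crossing partial sums) are simply incomparable, which is consistent with the abstract's claim of a partial rather than total order among mappings.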