Analysis of scalability of parallel algorithms and architectures: a survey
ICS '91 Proceedings of the 5th international conference on Supercomputing
Introduction to parallel computing: design and analysis of algorithms
Scalability of Parallel Algorithm-Machine Combinations
IEEE Transactions on Parallel and Distributed Systems
Performance Considerations of Shared Virtual Memory Machines
IEEE Transactions on Parallel and Distributed Systems
A study of average-case speedup and scalability of parallel computations on static networks
Mathematical and Computer Modelling: An International Journal
We investigate the average-case scalability of parallel algorithms executing on multicomputer systems whose static networks are k-ary d-cubes. Our performance metrics are the isoefficiency function and isospeed scalability. For average-case performance analysis, we formally define the concepts of average-case isoefficiency function and average-case isospeed scalability. By modeling parallel algorithms on multicomputers using task interaction graphs, we are mainly interested in the effects of communication overhead and load imbalance on the performance of parallel computations. We focus on the topology of static networks, whose limited connectivity constrains high performance. In our probabilistic model, task computation and communication times are treated as random variables, so that we can analyze the average-case performance of parallel computations. We derive the expected parallel execution time on symmetric static networks and apply the result to k-ary d-cubes. We characterize the maximum tolerable communication overhead such that constant average-case efficiency and average-case average speed can be maintained when the number of tasks grows at the rate Θ(P log P), where P is the number of processors. We find that the scalability of a parallel computation is essentially determined by the topology of the static network, i.e., the architecture of the parallel computer system. We also argue that, under our probabilistic model, the number of tasks should grow at least at the rate Θ(P log P) so that constant average-case efficiency and average speed can be maintained.
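The isoefficiency idea behind the abstract's Θ(P log P) result can be illustrated with a minimal sketch. The overhead model below (per-processor overhead growing like log P, total work scaled as Θ(P log P)) is a hypothetical example chosen for illustration, not the paper's actual derivation for k-ary d-cubes:

```python
import math

def efficiency(work, procs, overhead_per_proc):
    """Efficiency E = T1 / (P * TP), where T1 = work (serial time)
    and TP = work/P + per-processor overhead (parallel time)."""
    t_parallel = work / procs + overhead_per_proc
    return work / (procs * t_parallel)

# Hypothetical overhead model: each processor incurs log2(P) overhead.
# Scaling the total work as Theta(P log P) then keeps efficiency constant,
# mirroring the growth-rate condition stated in the abstract.
for p in [4, 16, 64, 256]:
    w = 10 * p * math.log2(p)                 # work grows as Theta(P log P)
    e = efficiency(w, p, math.log2(p))
    print(f"P={p:4d}  efficiency={e:.3f}")    # stays at 10/11 ~ 0.909
```

With this model, TP = 10 log2(P) + log2(P), so E = 10/11 regardless of P; if the work grew only linearly in P, the log P overhead term would dominate and efficiency would decay.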