An Analysis of Scatter Decomposition

  • Authors: David M. Nicol; Joel H. Saltz
  • Affiliations: College of William and Mary, Williamsburg, VA; Institute for Computer Applications in Science and Engineering, Hampton, VA
  • Venue: IEEE Transactions on Computers
  • Year: 1990

Abstract

A formal analysis of a powerful mapping technique known as scatter decomposition is provided. Scatter decomposition divides an irregular computational domain into a large number of equally sized pieces and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to explain formally why and when scatter decomposition works. The first result is that if the workload correlation is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload process is stationary Gaussian and the correlation function decreases linearly in distance until becoming zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. It is also shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
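
The modular ("scatter") assignment and the effect of granularity described in the abstract can be illustrated with a small simulation. The sketch below is not from the paper: it assumes a one-dimensional workload generated as a moving average of Gaussian noise, whose correlation decreases linearly with distance and then stays zero, matching the assumption behind the second result. All names and parameter values (`n`, `P`, `k`, `window`) are illustrative.

```python
# A minimal sketch (not from the paper) of 1-D scatter decomposition:
# the domain is cut into P*k equal pieces and piece i goes to processor
# i mod P. The workload is a moving average of i.i.d. Gaussians, so its
# correlation decreases linearly with distance until it reaches zero and
# then stays zero (the assumption behind the paper's second result).
import numpy as np

rng = np.random.default_rng(0)

def correlated_workload(n, window):
    """Stationary Gaussian workload with triangular (linearly decaying)
    autocorrelation of width `window`."""
    noise = rng.normal(size=n + window - 1)
    return np.convolve(noise, np.ones(window) / window, mode="valid")

def scatter_variance(workload, P, k):
    """Variance of per-processor workload when the domain is split into
    P*k equal pieces and piece i is mapped to processor i mod P."""
    pieces = workload.reshape(P * k, -1).sum(axis=1)   # workload per piece
    per_proc = np.array([pieces[p::P].sum() for p in range(P)])
    return per_proc.var()

n, P, window, trials = 4096, 8, 256, 200
for k in (1, 4, 16, 64):        # pieces assigned to each processor
    v = np.mean([scatter_variance(correlated_workload(n, window), P, k)
                 for _ in range(trials)])
    print(f"k={k:3d}  mean processor-workload variance {v:.4f}")
```

Under this decreasing-correlation assumption the printed variance should shrink as k grows, consistent with the first two results; the paper's counterexamples indicate the trend can reverse when correlation does not decrease with distance.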