Applying statistical physics to performance analysis of large-scale computing systems (abstract)

  • Authors:
  • Eugene Pinsky

  • Affiliations:
  • Computer Science Department, Boston University

  • Venue:
  • CSC '90: Proceedings of the 1990 ACM annual conference on Cooperation
  • Year:
  • 1990

Abstract

We propose to use the tools and methods of statistical mechanics as a unified computational approach to analyzing the performance of large-scale computing systems. Just as in statistical physics, we seek only a small amount of “macroscopic” information about the system (“thermodynamic” averages, e.g. average concurrency, throughput, blocking rates) despite the vast complexity of its “microscopic” interactions. It therefore seems natural that the complexity of large-scale computing systems is the sort that ought to yield to a treatment analogous to statistical mechanics.

We model a computation (the allocation of tasks to processors) as analogous to a physical activity in a structured space and look for equilibrium statistics, which are summarized in the corresponding partition function. This partition function, analogous to that of a Gibbs canonical ensemble in statistical physics, reflects the topology of the system, its load distribution, queueing, and other relevant descriptions of the system.

We use analogies from physics to develop fast approximation methods for computing system performance measures, and the analogy to the thermodynamic limit to analyze critical phase transitions in computing systems. These transitions manifest themselves as a global change of state arising from local interactions and are a direct consequence of scale.
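The partition-function analogy in the abstract can be illustrated with a toy sketch (not the paper's actual model): allocations of tasks to processors play the role of microstates, an assumed “energy” function penalizes load imbalance, and the Gibbs weight exp(−βE) defines a canonical ensemble from which macroscopic averages can be computed by brute-force enumeration.

```python
import itertools
import math

def imbalance_energy(state, num_procs):
    """Toy 'energy' of an allocation: squared deviation of each processor's
    load from the mean load, so balanced allocations have low energy."""
    loads = [sum(1 for s in state if s == p) for p in range(num_procs)]
    mean = len(state) / num_procs
    return sum((load - mean) ** 2 for load in loads)

def partition_function(num_tasks, num_procs, beta):
    """Z = sum over all task-to-processor allocations of exp(-beta * E),
    the analogue of the Gibbs canonical partition function."""
    return sum(
        math.exp(-beta * imbalance_energy(state, num_procs))
        for state in itertools.product(range(num_procs), repeat=num_tasks)
    )

def average_energy(num_tasks, num_procs, beta):
    """Ensemble average <E> = (1/Z) * sum E(state) * exp(-beta * E(state)),
    a stand-in for a macroscopic performance measure."""
    Z = partition_function(num_tasks, num_procs, beta)
    weighted = sum(
        imbalance_energy(s, num_procs) * math.exp(-beta * imbalance_energy(s, num_procs))
        for s in itertools.product(range(num_procs), repeat=num_tasks)
    )
    return weighted / Z
```

At β = 0 every allocation is equally likely and Z simply counts the states; as β grows, the ensemble concentrates on balanced allocations, so the average energy falls. The brute-force sum is exponential in the number of tasks, which is precisely why the abstract's fast approximation methods would be needed at scale.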