Utility Analysis of Parallel Simulation

  • Authors: David M. Nicol
  • Affiliation: Dartmouth College, Hanover, NH
  • Venue: Proceedings of the Seventeenth Workshop on Parallel and Distributed Simulation
  • Year: 2003

Abstract

Parallel computers are used to execute discrete-event simulations in contexts where a serial computer is unable to provide answers fast enough and/or is unable to hold the simulation state in memory. Traditional research in parallel simulation has focused on the degree to which a parallel simulator provides speedup. This paper takes a different view and asks how a parallel simulator provides increased user-defined utility as a result of being able to simulate larger problem sizes. We develop a model in which the utility of a simulation run is an increasing function of the problem size, and ask whether overall utility accrues faster on a parallel computer if one uses it to simulate one large problem in parallel, several smaller problem instances concurrently and each in parallel, or many small problem instances concurrently on single processors. We show that under our model assumptions, utility is accrued faster either by running one large problem instance in parallel using all the available processors, or by running one small problem instance per processor, concurrently. When we consider how to optimize utility per unit cost, we find that one either runs a large problem using all available processors, multiple small problems with one per processor, or a small problem using exactly one processor. Determination of the optimal configuration depends on the user's assessment of how rapidly utility grows with the problem size. Our main contribution is to show the linkage between the effectiveness of parallel simulation and a user's perception of the value of larger problem sizes. We show that if that utility grows less than linearly in the problem size, then use of parallelism is sub-optimal. We give precise relationships between our model parameters that govern when parallelism optimizes utility and when it optimizes price-performance. We see that when the model parameters are in a "normal" range, a user's perception of utility must grow significantly, e.g., proportional to problem size raised to the 1.5th power, for parallel processing to optimize cost-performance.
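
To make the trade-off concrete, below is a minimal Python sketch of the kind of comparison the abstract describes. It assumes a power-law utility u(n) = n**gamma, linear serial work w(n) = n, and a fixed parallel efficiency e (speedup e*p on p processors); these functional forms and the sample parameters are illustrative assumptions for the sketch, not the paper's actual model.

```python
# Toy comparison of utility-accrual rates for three ways of using P processors.
# Assumed (illustrative) model, not the paper's:
#   utility of a size-n run:   u(n) = n**gamma   (power law, gamma > 0)
#   serial work of a size-n run: w(n) = n        (linear work)
#   speedup on p processors:   e * p             (fixed efficiency e in (0, 1])

def utility_rate_one_large(P, m, gamma, e):
    """One problem of size P*m run in parallel on all P processors."""
    n = P * m
    time = n / (e * P)          # w(n) divided by speedup e*P
    return n**gamma / time

def utility_rate_many_small(P, m, gamma):
    """P independent problems of size m, one per processor, concurrently."""
    time = m                    # each instance runs serially: w(m) = m
    return P * m**gamma / time

def utility_rate_single(m, gamma):
    """One problem of size m on a single processor (other processors idle)."""
    return m**gamma / m

if __name__ == "__main__":
    P, m, e = 64, 1000, 0.8     # hypothetical processor count, size, efficiency
    for gamma in (0.8, 1.0, 1.5):
        rates = {
            "one large, all P procs": utility_rate_one_large(P, m, gamma, e),
            "P small, one per proc":  utility_rate_many_small(P, m, gamma),
            "one small, one proc":    utility_rate_single(m, gamma),
        }
        best = max(rates, key=rates.get)
        print(f"gamma = {gamma}: best configuration = {best}")
```

Under these assumptions the ratio of the one-large rate to the many-small rate works out to e * P**(gamma - 1), so dedicating all processors to one large instance pays off only when gamma exceeds 1 by enough to cover the efficiency loss, which is consistent with the abstract's claim that sub-linear utility growth makes parallelism sub-optimal. (Cost is deliberately omitted here; the price-performance comparison in the paper adds a cost term to the same kind of trade-off.)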