Optimistic simulation of parallel message-passing applications

  • Authors:
  • Thomas Phan; Rajive Bagrodia

  • Affiliations:
  • The University of California at Los Angeles, Computer Science Department, Los Angeles, CA (both authors)

  • Venue:
  • Proceedings of the Fifteenth Workshop on Parallel and Distributed Simulation
  • Year:
  • 2001

Abstract

Optimistic techniques can improve the performance of discrete-event simulations, but one area where optimistic simulators have been unable to show performance improvement is the simulation of parallel programs. Unfortunately, simulating parallel programs via direct execution is difficult: direct execution implies that the memory and computation requirements of the simulator are at least as large as those of the target application, which restricts the target systems and application problem sizes that can be studied. Memory usage is especially important for optimistic simulators because of their need for periodic state-saving and rollback. In our research we addressed this problem and implemented a simulation library, built on a Time-Warp-based optimistic engine, that uses direct execution to simulate and predict the performance of parallel MPI programs while attaining good simulation speedup. For programs with data sets too large to be directly executed with our optimistic simulator, we reduced the memory and computational needs of these programs by using a static task graph and code-slicing methodology, an approach that also exhibited good speedup.
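
For context, the state-saving and rollback the abstract refers to are the core mechanics of Time Warp: each logical process executes events optimistically, checkpoints its state, and rolls back when a message arrives with a timestamp in its past. The following C sketch is a minimal, hypothetical illustration of that idea (names such as LogicalProcess, save_state, and rollback are assumptions for illustration, not the paper's implementation or API):

```c
/*
 * Minimal sketch of Time Warp state saving and rollback for one
 * logical process (LP). Hypothetical code, not the paper's library.
 */
#include <stdio.h>

#define MAX_SNAPSHOTS 64

typedef struct {
    double virtual_time;   /* simulation time of the LP */
    long   counter;        /* stand-in for the LP's application state */
} LPState;

typedef struct {
    LPState current;
    LPState snapshots[MAX_SNAPSHOTS];
    int     num_snapshots;
} LogicalProcess;

/* Save a copy of the current state before processing an event optimistically. */
static void save_state(LogicalProcess *lp) {
    if (lp->num_snapshots < MAX_SNAPSHOTS)
        lp->snapshots[lp->num_snapshots++] = lp->current;
}

/* Process an event at timestamp ts: checkpoint, then advance the state. */
static void process_event(LogicalProcess *lp, double ts) {
    save_state(lp);
    lp->current.virtual_time = ts;
    lp->current.counter += 1;          /* the "computation" of the event */
}

/* A straggler event at ts < current virtual time forces a rollback to the
 * latest snapshot whose virtual time does not exceed ts. */
static void rollback(LogicalProcess *lp, double ts) {
    while (lp->num_snapshots > 0 &&
           lp->snapshots[lp->num_snapshots - 1].virtual_time > ts)
        lp->num_snapshots--;
    if (lp->num_snapshots > 0)
        lp->current = lp->snapshots[lp->num_snapshots - 1];
}

int main(void) {
    LogicalProcess lp = {0};
    process_event(&lp, 10.0);
    process_event(&lp, 20.0);
    process_event(&lp, 30.0);
    rollback(&lp, 15.0);               /* straggler arrives at virtual time 15 */
    printf("rolled back to t=%.1f, counter=%ld\n",
           lp.current.virtual_time, lp.current.counter);
    return 0;
}
```

Because every optimistically processed event requires a saved copy of the LP state, the memory footprint grows with the amount of optimism, which is why the abstract singles out memory usage as the key obstacle when the LP state includes an entire directly executed application.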