Improving Computer Architecture Simulation Methodology by Adding Statistical Rigor

  • Authors:
  • Joshua J. Yi; David J. Lilja; Douglas M. Hawkins

  • Affiliations:
  • IEEE; IEEE; -

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2005

Abstract

Due to cost, time, and flexibility constraints, computer architects use simulators to explore the design space when developing new processors and to evaluate the performance of potential enhancements. However, despite this dependence on simulators, statistically rigorous simulation methodologies are typically not used in computer architecture research. A formal methodology can provide a sound basis for the conclusions drawn from simulation results by adding statistical rigor and, consequently, can increase the architect's confidence in those results. This paper demonstrates the application of a rigorous statistical technique to the setup and analysis phases of the simulation process. Specifically, we apply a Plackett and Burman design to: 1) identify key processor parameters, 2) classify benchmarks based on how they affect the processor, and 3) analyze the effect of processor enhancements. Our results showed that, of the 41 user-configurable parameters in SimpleScalar, only 10 had a significant effect on execution time; of those 10, the number of reorder buffer entries and the L2 cache latency were by far the two most significant. Our results also showed that Instruction Precomputation, a value reuse-like microarchitectural technique, primarily improves the processor's performance by relieving integer ALU contention.
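
To make the screening step concrete, the sketch below shows how a Plackett and Burman design can be constructed and how the main effect of each parameter on execution time can be ranked. The 12-run design, the parameter names and low/high levels, and the synthetic `run_simulation` stand-in are illustrative assumptions only; they do not reproduce the paper's 41-parameter SimpleScalar configuration.

```python
# Minimal sketch of a Plackett and Burman screening experiment.
# Parameter names, low/high levels, and run_simulation() are hypothetical.
import numpy as np

# Standard generator row for the 12-run Plackett-Burman design (up to 11 factors).
GENERATOR = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
RNG = np.random.default_rng(0)

def pb_design_12() -> np.ndarray:
    """Return the 12x11 matrix of +1/-1 factor levels."""
    rows = [np.roll(GENERATOR, shift) for shift in range(11)]
    rows.append(-np.ones(11, dtype=int))  # final run sets every factor to its low level
    return np.array(rows)

# Hypothetical processor parameters with (low, high) settings.
PARAMS = {
    "rob_entries":   (32, 128),
    "l2_latency":    (6, 20),
    "int_alus":      (2, 8),
    "l1_dcache_kb":  (8, 64),
    "bpred_entries": (1024, 16384),
}

def run_simulation(config: dict) -> float:
    """Synthetic stand-in returning an execution time; a real study
    would launch the cycle-accurate simulator with this configuration."""
    return 1e6 / config["rob_entries"] + 50.0 * config["l2_latency"] + RNG.normal(0.0, 5.0)

def screen_parameters():
    names = list(PARAMS)
    design = pb_design_12()[:, :len(names)]  # unused columns could estimate noise
    times = []
    for run in design:
        config = {name: (hi if level > 0 else lo)
                  for (name, (lo, hi)), level in zip(PARAMS.items(), run)}
        times.append(run_simulation(config))
    times = np.array(times)
    # Main effect of a parameter = mean time at its high level minus mean at its low level.
    effects = {name: times[design[:, j] > 0].mean() - times[design[:, j] < 0].mean()
               for j, name in enumerate(names)}
    # Rank by effect magnitude: the largest effects identify the key parameters.
    return sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True)

if __name__ == "__main__":
    for name, effect in screen_parameters():
        print(f"{name:15s} effect on execution time: {effect:10.1f}")
```

In a real study, the two levels for each parameter would be chosen to bracket its plausible design range, and the ranked effects would feed the parameter-selection, benchmark-classification, and enhancement analyses described in the abstract.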