Poster: FLAMBES: evolving fast performance models

  • Authors and affiliations:
  • Adam Crume and Carlos Maltzahn (University of California, Santa Cruz, CA, USA); Jason Cope, Sam Lang, Rob Ross, and Phil Carns (Argonne National Laboratory, Argonne, IL, USA); Chris Carothers and Ning Liu (Rensselaer Polytechnic Institute, Troy, NY, USA); Curtis Janssen (Sandia National Labs, Livermore, CA, USA); John Bent, Stephan Eidenbenz, and Meghan McClelland (Los Alamos National Lab, Los Alamos, NM, USA)

  • Venue:
  • Proceedings of the 2011 companion on High Performance Computing Networking, Storage and Analysis Companion
  • Year:
  • 2011

Abstract

Large clusters and supercomputers are simulated to aid in design. Many devices, such as hard drives, are slow to simulate. Our approach uses a genetic algorithm to fit the parameters of an analytical model of a device. Fitting focuses on aggregate accuracy rather than request-level accuracy, since individual request times are irrelevant in large simulations. The model is fitted to traces from a physical device or from a known-accurate device model. This is done once, offline, before running the simulation. Execution of the fitted model is fast, since it requires only a modest amount of floating-point math and no event queueing, and only a few floating-point numbers are needed for state. Compared to an event-driven model, this trades a little accuracy for a large gain in performance.
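
To make the approach concrete, below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: a simple genetic algorithm that fits the parameters of a hypothetical analytical disk model to aggregate trace statistics. The model form (fixed per-request overhead plus size divided by bandwidth), the parameter names, and the GA settings are all illustrative assumptions; the poster does not specify them.

```python
import random

# Hypothetical analytical model (an assumption, not from the poster):
# predicted service time for one request of `size_bytes`.
def model_time(params, size_bytes):
    overhead_s, bandwidth_bps = params
    return overhead_s + size_bytes / bandwidth_bps

# Aggregate fitness: error on the total time of whole trace batches,
# rather than per-request error, mirroring the poster's emphasis on
# aggregate accuracy.
def fitness(params, batches):
    err = 0.0
    for sizes, measured_total in batches:
        predicted_total = sum(model_time(params, s) for s in sizes)
        err += abs(predicted_total - measured_total)
    return err

def evolve(batches, pop_size=40, generations=80):
    # Random initial population of (overhead_s, bandwidth_bps) candidates.
    pop = [(random.uniform(1e-4, 1e-1), random.uniform(1e6, 1e9))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, batches))
        survivors = pop[:pop_size // 2]  # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover plus small multiplicative mutation.
            child = tuple(random.choice(pair) * random.uniform(0.9, 1.1)
                          for pair in zip(a, b))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, batches))

# Synthetic "trace": batches of request sizes with measured batch totals.
# Generated here from a known ground truth purely so the example is
# self-contained; in practice these would come from a real device trace.
truth = (0.005, 100e6)  # 5 ms overhead, 100 MB/s
batches = []
for _ in range(10):
    sizes = [random.randint(4096, 4 << 20) for _ in range(50)]
    batches.append((sizes, sum(model_time(truth, s) for s in sizes)))

best = evolve(batches)
print(best)  # best (overhead_s, bandwidth_bps) found; should be near `truth`
```

Once fitted offline in this fashion, the cheap analytical model can stand in for an event-driven device model inside the large-scale simulation, as the abstract describes.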