An Automated Benchmarking Toolset

  • Authors:
  • Michel Courson; Alan Mink; Guillaume Marçais; Benjamin Traverse

  • Venue:
  • HPCN Europe 2000: Proceedings of the 8th International Conference on High-Performance Computing and Networking
  • Year:
  • 2000

Abstract

The drive for performance in parallel computing and the need to evaluate platform upgrades or replacements have made frequent runs of benchmark codes commonplace for application and platform evaluation and tuning. NIST is developing a prototype of an automated benchmarking toolset that reduces the manual effort of running such benchmarks and analyzing their results. The toolset consists of three main modules. A Data Collection and Storage module gathers performance data and implements a central repository for that data. A second module provides an integrated mechanism to analyze and visualize the data stored in the repository. An Experiment Control module assists the user in designing and executing experiments. To reduce development effort, the toolset is built around existing tools and is designed to be easily extensible to support other tools.
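
To make the three-module decomposition concrete, the following is a minimal Python sketch of how such a toolset could be structured. It is not from the paper; all class and method names (DataRepository, Analyzer, ExperimentController, BenchmarkResult) are hypothetical illustrations of the architecture the abstract describes.

    # Hypothetical sketch of the three-module architecture from the abstract;
    # none of these names or interfaces appear in the paper itself.
    from dataclasses import dataclass
    from typing import Dict, List


    @dataclass
    class BenchmarkResult:
        """One benchmark run: which code ran, on what platform, with what metrics."""
        benchmark: str
        platform: str
        metrics: Dict[str, float]


    class DataRepository:
        """Data Collection and Storage module: central store for performance data."""
        def __init__(self) -> None:
            self._results: List[BenchmarkResult] = []

        def store(self, result: BenchmarkResult) -> None:
            self._results.append(result)

        def query(self, benchmark: str) -> List[BenchmarkResult]:
            return [r for r in self._results if r.benchmark == benchmark]


    class Analyzer:
        """Analysis/visualization module: summarizes data held in the repository."""
        def __init__(self, repo: DataRepository) -> None:
            self._repo = repo

        def mean_metric(self, benchmark: str, metric: str) -> float:
            values = [r.metrics[metric] for r in self._repo.query(benchmark)]
            return sum(values) / len(values)


    class ExperimentController:
        """Experiment Control module: runs experiments and stores their results."""
        def __init__(self, repo: DataRepository) -> None:
            self._repo = repo

        def run(self, benchmark: str, platforms: List[str]) -> None:
            for platform in platforms:
                # A real controller would launch the benchmark on each platform;
                # here a placeholder metric stands in for the measured runtime.
                result = BenchmarkResult(benchmark, platform, {"runtime_s": 1.0})
                self._repo.store(result)

In this sketch the controller populates the repository and the analyzer reads from it, so each module sits behind a narrow interface and could be replaced by an existing tool, consistent with the extensibility goal stated in the abstract.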