Auto-pilot: a platform for system software benchmarking

  • Authors:
  • Charles P. Wright, Nikolai Joukov, Devaki Kulkarni, Yevgeniy Miretskiy, and Erez Zadok

  • Affiliations:
  • Stony Brook University (all authors)

  • Venue:
  • ATEC '05: Proceedings of the USENIX Annual Technical Conference
  • Year:
  • 2005

Abstract

When developing software, it is essential to evaluate its performance and stability, making benchmarking a significant part of the software development cycle. Benchmarking is also used to show that a system is useful or to provide insight into how systems behave. However, benchmarking is a tedious task that few enjoy but that every programmer and systems researcher must perform. Developers need an easy-to-use system for collecting and analyzing benchmark results. We introduce Auto-pilot, a tool for producing accurate and informative benchmark results. Auto-pilot provides an infrastructure for running tests, sample test scripts, and analysis tools. Auto-pilot is not just another metric or benchmark: it is a system for automating the repetitive tasks of running, measuring, and analyzing the results of arbitrary programs. Auto-pilot can run a given test until the results stabilize, automatically highlight outlying results, and automatically detect memory leaks. We have used Auto-pilot for over three years on eighteen distinct projects and have found it to be an invaluable tool that has saved us significant effort.
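The abstract's central mechanism, rerunning a test until the measurements stabilize, is commonly implemented by repeating a benchmark until the confidence interval of the mean is sufficiently narrow. Below is a minimal Python sketch of that idea. It is an illustration only, not Auto-pilot's actual implementation (Auto-pilot provides its own test scripts and analysis tools); the run_once and run_until_stable helpers, the benchmark command, and the thresholds and 95% confidence criterion are assumptions made for this example.

    import statistics
    import subprocess
    import time

    def run_once(cmd):
        # Time one execution of the benchmark command (wall-clock seconds).
        # Hypothetical helper: not part of Auto-pilot itself.
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    def run_until_stable(cmd, min_runs=5, max_runs=30, rel_half_width=0.05):
        # Rerun the benchmark until the 95% confidence interval's half-width
        # falls below rel_half_width times the mean, or until max_runs is
        # reached. All thresholds here are illustrative defaults.
        samples = []
        while len(samples) < max_runs:
            samples.append(run_once(cmd))
            if len(samples) >= min_runs:
                mean = statistics.mean(samples)
                sem = statistics.stdev(samples) / len(samples) ** 0.5
                # 1.96 standard errors approximates a 95% CI for the mean
                # (a t-value would be more precise for small sample counts).
                if 1.96 * sem < rel_half_width * mean:
                    break
        return samples

    # Example: measure a (hypothetical) benchmark wrapper script.
    times = run_until_stable(["./benchmark-run.sh"])
    print(f"{len(times)} runs, mean {statistics.mean(times):.3f}s")

The other features the abstract mentions can be layered on the same samples: outlier highlighting, for instance, might flag any run more than two standard deviations from the mean, and memory-leak detection might compare free memory before and after each run.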