Brittle Metrics in Operating Systems Research

  • Authors:
  • Jeffrey C. Mogul

  • Affiliations:
  • -

  • Venue:
  • HOTOS '99 Proceedings of the Seventh Workshop on Hot Topics in Operating Systems
  • Year:
  • 1999

Abstract

Most operating systems research publications make claims about performance. We expect these performance claims to be both repeatable and relevant to important applications. We also expect them to be comparable to similar claims made in other papers. This implies the need for realistic and widely-used benchmarks.

Often, however, no such benchmark exists. The problem is especially acute in application areas with significant external latencies (such as Internet servers and file systems). Sometimes this leads researchers to measure only what is easily measured. Sometimes it leads to the naive use of unrealistic benchmarks, causing research to be diverted from solving actual problems.

A serious project to put operating systems research on a sound quantitative basis requires that we make an explicit effort to develop repeatable, comparable, and realistic metrics for evaluating operating systems research. We must also develop reliable techniques for using benchmark results to predict real-world performance.