Assessing computer performance with SToCS

  • Authors:
  • Leonardo Piga, Gabriel F. T. Gomes, Rafael Auler, Bruno Rosa, Sandro Rigo, Edson Borin

  • Affiliations:
  • University of Campinas, Campinas, Brazil (all authors)

  • Venue:
  • Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering
  • Year:
  • 2013

Abstract

Several aspects of a computer system cause performance measurements to include random errors. Moreover, these systems are typically composed of a non-trivial combination of individual components that may cause one system to perform better or worse than another depending on the workload. Hence, properly measuring and comparing computer system performance are non-trivial tasks. The majority of work published at recent major computer architecture conferences does not report the random errors measured in the experiments. The few remaining authors have been using only confidence intervals or standard deviations to quantify and factor out random errors. Recent publications claim that this approach can still lead to misleading conclusions. In this work, we reproduce and discuss the results obtained in a previous study. Finally, we propose SToCS, a tool that integrates several statistical frameworks and facilitates the analysis of computer science experiments.
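The abstract notes that confidence intervals are the most common way authors quantify random measurement error, yet can still mislead. The paper's SToCS tool itself is not shown here; the following is only a generic sketch of the kind of confidence-interval computation the abstract refers to, using Python's standard library. The sample runtimes and the hard-coded t critical value (2.262, the two-sided 95% value for 9 degrees of freedom, i.e. 10 samples) are illustrative assumptions, not data from the paper.

```python
import math
import statistics

def confidence_interval(samples, t_crit=2.262):
    """95% confidence interval for the mean of `samples`.

    t_crit is the two-sided 95% Student's t critical value for
    df = len(samples) - 1; 2.262 assumes exactly 10 samples.
    """
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - t_crit * sem, mean + t_crit * sem

# Hypothetical runtimes (seconds) of the same benchmark on two systems.
runtimes_a = [10.2, 10.5, 10.1, 10.4, 10.3, 10.6, 10.2, 10.4, 10.3, 10.5]
runtimes_b = [10.4, 10.7, 10.5, 10.6, 10.8, 10.5, 10.6, 10.7, 10.6, 10.9]

lo_a, hi_a = confidence_interval(runtimes_a)
lo_b, hi_b = confidence_interval(runtimes_b)

# Comparing systems by eyeballing interval overlap is exactly the kind of
# practice the paper warns can mislead: overlap does not imply the means
# are statistically indistinguishable, and a proper hypothesis test
# (or a tool such as SToCS) should be used for the comparison.
print(f"A: [{lo_a:.3f}, {hi_a:.3f}]  B: [{lo_b:.3f}, {hi_b:.3f}]")
```

The interval shrinks with the square root of the sample count, which is why reporting only a mean over a handful of runs, without any error bars at all, is the weakest form of the practice the abstract criticizes.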