Precise regression benchmarking with random effects: improving Mono benchmark results
EPEW'06: Proceedings of the Third European Conference on Formal Methods and Stochastic Models for Performance Evaluation
Engineering a large software project involves tracking the impact of development and maintenance changes on software performance. One approach to such tracking is regression benchmarking, which automates the benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent in contemporary computer systems and execution environments, and the impact of that nondeterminism on the results. Using a fully automated regression benchmarking environment for the Mono open-source project as an example, we show how the problems associated with nondeterminism can be tackled with statistical methods.
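As a rough illustration of the statistical approach the abstract describes (the paper itself uses random-effects models; this sketch substitutes a simpler Welch's t-test), one can compare repeated run times of the same benchmark on two revisions and flag a change only when the difference between means is large relative to the run-to-run noise. All function names and data below are hypothetical, not taken from the paper.

```python
import math
import statistics

def welch_t(old, new):
    """Welch's t statistic for two independent samples of run times.

    A large |t| suggests the difference in means is unlikely to be
    explained by run-to-run nondeterminism alone.
    """
    m1, m2 = statistics.mean(old), statistics.mean(new)
    v1, v2 = statistics.variance(old), statistics.variance(new)
    se = math.sqrt(v1 / len(old) + v2 / len(new))
    return (m2 - m1) / se

# Hypothetical repeated runs (seconds) of one benchmark on two revisions:
baseline = [10.1, 10.3, 9.9, 10.2, 10.0, 10.1]
candidate = [10.9, 11.1, 10.8, 11.0, 11.2, 10.9]

t = welch_t(baseline, candidate)
# Compare |t| against the critical value for the chosen significance
# level before reporting a regression, rather than comparing raw means.
print(f"t = {t:.2f}")
```

Averaging over many runs and testing against the observed variance, rather than comparing single measurements, is what lets an automated environment distinguish genuine performance regressions from nondeterministic noise.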