Efficient experiment selection in automated software performance evaluations

  • Authors:
  • Dennis Westermann, Rouven Krebs, Jens Happe

  • Affiliations:
  • SAP Research, Karlsruhe, Germany (all authors)

  • Venue:
  • EPEW'11: Proceedings of the 8th European Conference on Computer Performance Engineering
  • Year:
  • 2011

Abstract

The performance of today's enterprise applications is influenced by a variety of parameters across different layers. Evaluating the performance of such systems is therefore a time- and resource-consuming process. The number of possible parameter combinations and configurations requires many experiments in order to derive meaningful conclusions. Although many tools for automated performance testing are available, controlling experiments and analyzing results still requires considerable manual effort. In this paper, we apply statistical model inference techniques, namely Kriging and MARS, in order to adaptively select experiments. Our approach automatically selects and conducts experiments based on the accuracy observed for the models inferred from the currently available data. We validated the approach using an industrial ERP scenario. The results demonstrate that we can automatically infer a prediction model with a mean relative error of 1.6% using only 18% of the measurement points in the configuration space.
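The adaptive selection loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: a piecewise-linear interpolant stands in for the Kriging/MARS surrogates, and an oracle-based accuracy check (evaluating the true response at candidate points) stands in for the paper's validation on measured data; `true_response` and all parameter values are hypothetical.

```python
import bisect

def true_response(x):
    # Hypothetical system under test; stands in for running one experiment.
    return (x - 3.0) ** 2 + 1.0

def interpolate(xs, ys, x):
    # Piecewise-linear surrogate built from the measured points (xs, ys).
    # A simplification standing in for a Kriging or MARS model.
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    x0, x1 = xs[i - 1], xs[i]
    t = (x - x0) / (x1 - x0)
    return ys[i - 1] * (1 - t) + ys[i] * t

def adaptive_selection(lo, hi, max_rel_error=0.05, budget=50):
    # Start with the domain boundaries, then repeatedly run the experiment
    # at the candidate point where the current surrogate is least accurate,
    # stopping once the worst relative error drops below the target.
    xs = [lo, hi]
    ys = [true_response(lo), true_response(hi)]
    candidates = [lo + k * (hi - lo) / 100 for k in range(101)]
    for _ in range(budget):
        # Oracle accuracy check for illustration only; the real approach
        # estimates accuracy from the already-measured data.
        errors = [(abs(interpolate(xs, ys, c) - true_response(c))
                   / max(abs(true_response(c)), 1e-9), c)
                  for c in candidates if c not in xs]
        worst_err, worst_x = max(errors)
        if worst_err <= max_rel_error:
            break
        i = bisect.bisect_left(xs, worst_x)
        xs.insert(i, worst_x)
        ys.insert(i, true_response(worst_x))
    return xs, ys

xs, ys = adaptive_selection(0.0, 6.0)
print(f"{len(xs)} of 101 candidate points measured")
```

The loop mirrors the paper's core idea at a toy scale: experiments are added only where the inferred model is still inaccurate, so far fewer points are measured than a full sweep of the configuration space would require.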