An Empirical Investigation of the Effort of Creating Reusable, Component-Based Models for Performance Prediction

  • Authors:
  • Anne Martens, Steffen Becker, Heiko Koziolek, Ralf Reussner

  • Affiliations:
  • Chair for Software Design and Quality, University of Karlsruhe (TH), 76131 Karlsruhe, Germany
  • FZI Forschungszentrum Informatik, 76131 Karlsruhe, Germany
  • ABB Corporate Research, 68526 Ladenburg, Germany
  • Chair for Software Design and Quality, University of Karlsruhe (TH), 76131 Karlsruhe, Germany

  • Venue:
  • CBSE '08 Proceedings of the 11th International Symposium on Component-Based Software Engineering
  • Year:
  • 2008

Abstract

Model-based performance prediction methods aim at evaluating the expected response time, throughput, and resource utilisation of a software system at design time, before implementation. Existing performance prediction methods use either monolithic, throw-away prediction models or component-based, reusable prediction models. While it is intuitively clear that developing reusable models requires more effort, this additional effort has not yet been quantified or analysed systematically. To study the effort, we conducted a controlled experiment with 19 computer science students who predicted the performance of two example systems applying an established, monolithic method (Software Performance Engineering, SPE) as well as our own component-based method (Palladio). The results show that the effort of model creation with Palladio is approximately 1.25 times higher than with SPE in our experimental setting, with the resulting models having comparable prediction accuracy. Therefore, in some cases, the creation of reusable prediction models can already be justified if they are reused at least once.
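The break-even claim at the end of the abstract can be read as a simple cost comparison. The following sketch is our own illustrative formalisation, not taken from the paper: E denotes the effort of building one monolithic SPE model, 1.25E the one-time effort of building a reusable Palladio model, and r the (assumed small) effort of reusing an existing model.

```latex
% Illustrative break-even comparison; the symbols E, r, n are assumptions
% introduced here, not definitions from the paper.
% E      : effort of one monolithic model (rebuilt for every prediction)
% 1.25 E : one-time effort of a reusable component-based model
% r      : effort per reuse of the component-based model
% n      : number of predictions performed
\[
  \underbrace{n \cdot E}_{\text{monolithic: rebuilt each time}}
  \quad\text{vs.}\quad
  \underbrace{1.25\,E + (n-1)\,r}_{\text{reusable: built once, then reused}}
\]
\[
  n = 2:\qquad 2E \;>\; 1.25\,E + r \;\iff\; r \;<\; 0.75\,E
\]
```

Under this reading, a single reuse (n = 2) already pays off whenever reusing the model costs less than 75% of building a fresh monolithic model, which is consistent with the abstract's conclusion that one reuse can suffice to justify the extra modelling effort.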