One evaluation of model-based testing and its automation

  • Authors:
  • A. Pretschner (ETH Zürich, IFW C45.2, ETH Zentrum, Zürich, Switzerland)
  • W. Prenninger (Institut für Informatik, TU München, Garching, Germany)
  • S. Wagner (Institut für Informatik, TU München, Garching, Germany)
  • C. Kühnel (Institut für Informatik, TU München, Garching, Germany)
  • M. Baumgartner (BMW AG, EI-20, München, Germany)
  • B. Sostawa (BMW AG, EI-20, München, Germany)
  • R. Zölch (BMW AG, EI-20, München, Germany)
  • T. Stauner (BMW CarIT GmbH, München, Germany)

  • Venue:
  • Proceedings of the 27th International Conference on Software Engineering
  • Year:
  • 2005

Abstract

Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.
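The abstract's opening sentence summarizes the test-generation idea: replay input sequences on a behavior model to obtain the expected outputs that an implementation must reproduce. Below is a minimal sketch of that idea, assuming a toy Mealy-style state machine; the model, the names (MODEL, run_model, generate_tests), and the exhaustive selection criterion are hypothetical and not the paper's network-controller model or its generation tooling.

```python
from itertools import product

# Hypothetical behavior model as a Mealy-style transition table:
# (state, input) -> (next_state, expected_output)
MODEL = {
    ("idle", "wake"):   ("ready", "ack"),
    ("idle", "msg"):    ("idle", "drop"),
    ("ready", "msg"):   ("ready", "forward"),
    ("ready", "sleep"): ("idle", "ack"),
}

INPUTS = ["wake", "msg", "sleep"]


def run_model(trace, start="idle"):
    """Replay an input trace on the model; return the expected output trace,
    or None if the trace leaves the modeled behavior (no transition defined)."""
    state, outputs = start, []
    for inp in trace:
        if (state, inp) not in MODEL:
            return None
        state, out = MODEL[(state, inp)]
        outputs.append(out)
    return outputs


def generate_tests(length):
    """Enumerate all accepted input traces of a given length, pairing each
    with its expected outputs. This is a simple coverage-style selection;
    random selection would sample traces instead of enumerating them."""
    tests = []
    for trace in product(INPUTS, repeat=length):
        expected = run_model(trace)
        if expected is not None:
            tests.append((list(trace), expected))
    return tests


if __name__ == "__main__":
    # Each pair is one test case: inputs to drive the implementation,
    # outputs to compare against the model's prediction.
    for inputs, expected in generate_tests(3):
        print(f"inputs={inputs} expected={expected}")
```

In this sketch the selection criterion (all traces of a fixed length) stands in for the functional test selection criteria and random generation compared in the study; the point is only that test cases are model traces, i.e. inputs paired with model-predicted outputs.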