Separating testing concerns by means of models

  • Authors: Dirk Wischermann, Wolfgang Schröder-Preikschat

  • Affiliation: Friedrich-Alexander University, Erlangen, Germany (both authors)

  • Venue: Proceedings of the 1st Workshop on Testing Object-Oriented Systems
  • Year: 2010

Abstract

The number of potential execution paths through software usually increases dramatically with the size of the program, yet many coding errors manifest themselves only on a few particular paths. It is therefore a challenge for software testers to select a feasible subset of all paths to cover in order to find the most errors. This article describes a way of using behavioural models (such as state diagrams) to separate concerns in structural testing. Each model describes one concern, such as a usage protocol, a policy or a more complex behaviour. The goal is to obtain a better and more differentiated reliability statement from fewer test cases, to find bugs that would probably not manifest themselves otherwise, and to provide helpful information for debugging. In contrast to many other approaches, which aim at a high level of automation or at synergies between the design and test processes, our approach allows more errors to be detected with the same test cases (by means of generated built-in tests) and better test cases to be selected (using adequate coverage criteria). Having to supply the required knowledge in the form of models means shifting effort from testing to development.
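
As a rough illustration of the built-in-test idea described in the abstract (a minimal sketch only, not the authors' implementation; the Connection class, its methods and the protocol monitor are hypothetical), a usage-protocol model can be expressed as a small state machine and embedded in the class under test, so that any ordinary test case exercising the class also checks the protocol concern:

```java
// Sketch: a usage-protocol model (one concern) woven into a hypothetical
// Connection class as a built-in test. Not the authors' tooling.
import java.util.Map;
import java.util.Set;

enum State { CLOSED, OPEN }

/** Built-in test: monitors the open/send/close protocol of Connection. */
class ConnectionProtocolMonitor {
    // Allowed events per state, i.e. the behavioural model of the protocol.
    private static final Map<State, Set<String>> ALLOWED = Map.of(
        State.CLOSED, Set.of("open"),
        State.OPEN,   Set.of("send", "close")
    );

    private State state = State.CLOSED;

    /** Called before each monitored method; a violation fails the test run. */
    void step(String event) {
        if (!ALLOWED.get(state).contains(event)) {
            throw new AssertionError(
                "Protocol violation: '" + event + "' not allowed in state " + state);
        }
        // Transition according to the model.
        state = event.equals("close") ? State.CLOSED : State.OPEN;
    }
}

/** Hypothetical class under test with the monitor built in. */
class Connection {
    private final ConnectionProtocolMonitor monitor = new ConnectionProtocolMonitor();

    void open()         { monitor.step("open");  /* ... real open logic ... */ }
    void send(String m) { monitor.step("send");  /* ... real send logic ... */ }
    void close()        { monitor.step("close"); /* ... real close logic ... */ }
}

public class Demo {
    public static void main(String[] args) {
        Connection c = new Connection();
        c.open();
        c.send("hello");
        c.close();
        c.send("late");  // violates the protocol model -> AssertionError
    }
}
```

With such a monitor in place, an existing test case that happens to drive the object through an illegal call sequence fails immediately at the violating call, which is the kind of additional error detection and debugging aid the abstract attributes to generated built-in tests.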