Dynamic testing via automata learning

  • Authors:
  • Harald Raffelt; Bernhard Steffen; Tiziana Margaria

  • Affiliations:
  • University of Dortmund, Programming Systems, Dortmund, Germany; University of Dortmund, Programming Systems, Dortmund, Germany; Services and Software Engineering, Universität Potsdam, Potsdam, Germany

  • Venue:
  • HVC'07: Proceedings of the 3rd International Haifa Verification Conference on Hardware and Software: Verification and Testing
  • Year:
  • 2007

Abstract

This paper presents dynamic testing, a method that exploits automata learning to systematically test (black-box) systems almost without prerequisites. Based on interface descriptions, our method successively explores the system under test (SUT) while simultaneously extrapolating a behavioral model, which in turn steers the further exploration process. Owing to the applied learning technique, our method is optimal in the sense that the extrapolated models are the most concise consistent representations of all the information gathered during exploration. Using the LearnLib, our framework for automata learning, our method can be elegantly combined with numerous optimizations of the learning procedure, with various choices of model structure, and, last but not least, with the option to dynamically/interactively enlarge the alphabet underlying the learning process. All these features are illustrated using the web application Mantis, a bug-tracking system widely used in practice, as a case study. We show how the dynamic testing procedure proceeds and how behavioral models arise that concisely summarize the current testing effort. It turns out that these models, besides steering the automatic exploration process, are ideal for user guidance and for supporting analyses that improve system understanding.
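
To make the learning loop behind dynamic testing concrete, the following is a minimal, self-contained sketch in Java (the LearnLib's implementation language) of an Angluin-style L* learner whose equivalence queries are approximated by random testing, the compromise a black-box setting forces. The SUT (here a toy language over {a, b}), the class and method names, and the testing budget are all illustrative assumptions, not the paper's case study and not the LearnLib API.

    import java.util.*;
    import java.util.function.Predicate;

    // Sketch of an Angluin-style L* loop; equivalence queries are
    // approximated by random testing against the current hypothesis.
    public class DynamicTestingSketch {

        // Hypothetical black-box SUT: accepts words with an even number of 'a's.
        // Each membership query corresponds to one test run on the real system.
        static final Predicate<String> SUT =
                w -> w.chars().filter(c -> c == 'a').count() % 2 == 0;
        static final char[] SIGMA = {'a', 'b'};  // interface alphabet

        static final Map<String, Boolean> cache = new HashMap<>();
        static final List<String> S = new ArrayList<>(List.of(""));  // access strings
        static final List<String> E = new ArrayList<>(List.of(""));  // distinguishing suffixes

        // Membership query, cached so no test is ever repeated on the SUT.
        static boolean mq(String w) { return cache.computeIfAbsent(w, SUT::test); }

        // Row of the observation table: the SUT's answers for s followed by
        // every suffix in E. Equal rows mean "same state" in the hypothesis.
        static String row(String s) {
            StringBuilder b = new StringBuilder();
            for (String e : E) b.append(mq(s + e) ? '1' : '0');
            return b.toString();
        }

        // Closedness: every one-letter extension of an access string must
        // reproduce a row already represented in S.
        static void close() {
            for (boolean changed = true; changed; ) {
                changed = false;
                Set<String> known = new HashSet<>();
                for (String s : S) known.add(row(s));
                for (String s : new ArrayList<>(S))
                    for (char a : SIGMA)
                        if (known.add(row(s + a))) { S.add(s + a); changed = true; }
            }
        }

        // Consistency: access strings with equal rows must stay equal after any
        // input symbol; otherwise the distinguishing suffix is added to E.
        static boolean consistent() {
            for (String s1 : S) for (String s2 : S) {
                if (s1.equals(s2) || !row(s1).equals(row(s2))) continue;
                for (char a : SIGMA)
                    for (String e : new ArrayList<>(E))
                        if (mq(s1 + a + e) != mq(s2 + a + e)) { E.add(a + e); return false; }
            }
            return true;
        }

        // Simulate the hypothesis automaton by walking from row to row.
        // Only valid after the table has been closed.
        static boolean hypothesisAccepts(String w) {
            String state = "";
            for (char a : w.toCharArray()) {
                String target = row(state + a);
                for (String s : S) if (row(s).equals(target)) { state = s; break; }
            }
            return mq(state);  // E contains "", so this is the acceptance bit
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            while (true) {
                do { close(); } while (!consistent());  // stabilize the table
                String cex = null;  // approximate equivalence query: random tests
                for (int i = 0; i < 2000 && cex == null; i++) {
                    StringBuilder w = new StringBuilder();
                    for (int j = rnd.nextInt(8); j > 0; j--)
                        w.append(SIGMA[rnd.nextInt(SIGMA.length)]);
                    if (hypothesisAccepts(w.toString()) != mq(w.toString()))
                        cex = w.toString();
                }
                if (cex == null) break;  // no disagreement found: model stands
                for (int i = 1; i <= cex.length(); i++) {  // add all prefixes
                    String p = cex.substring(0, i);
                    if (!S.contains(p)) S.add(p);
                }
            }
            Set<String> states = new HashSet<>();
            for (String s : S) states.add(row(s));
            System.out.println("Model has " + states.size() + " states after "
                    + cache.size() + " membership queries.");
        }
    }

The loop mirrors the abstract's description: membership queries explore the SUT, the observation table extrapolates a hypothesis model, and disagreements found while testing against that model steer the next round of exploration. Enlarging SIGMA between rounds would correspond to the paper's option of dynamically extending the learning alphabet.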