This paper presents dynamic testing, a method that exploits automata learning to systematically test (black-box) systems almost without prerequisites. Starting from an interface description and optional sample test cases, the method successively explores the system under test (SUT) in order to extrapolate a behavioural model, which in turn steers the further exploration. Due to the applied learning technique, the method is optimal in the sense that the extrapolated models are as concise as possible (i.e. state-minimal) while consistently representing all the information gathered during exploration. Using LearnLib, our framework for automata learning, the method can be elegantly combined with numerous optimisations of the learning procedure, with various choices of model structure, and with the option of dynamically/interactively enlarging the alphabet underlying the learning process. The latter is important in the Web context, where entirely new situations may arise when following links. These features are illustrated using the web application Mantis, a bug-tracking system widely used in practice, as a case study; a second case study demonstrates the scalability of the approach. We show how the dynamic testing procedure works and how behavioural models arise that concisely summarise the current testing effort. Besides steering the automatic exploration process, these models reveal the system structure from a user perspective, which makes them ideal for user guidance and for analyses that improve system understanding.
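To make the explore-extrapolate-steer loop concrete, the following is a minimal, self-contained sketch of active automata learning in the style of Angluin's L* (Maler-Pnueli counterexample handling). It is an illustration only, not the paper's actual LearnLib implementation: the `member` callback stands in for executing a test on the SUT, and the equivalence query is approximated by exhaustively testing all words up to a bounded length, a simplification of the conformance testing a real setup would use.

```python
from itertools import product

def lstar(alphabet, member, max_len=6):
    """Learn a DFA for an unknown target from a membership oracle
    `member(word) -> bool` (a stand-in for running a test on the SUT).
    Returns (num_states, start, accepting, delta); the hypothesis is
    state-minimal for the observations made, mirroring the conciseness
    guarantee described in the abstract."""
    S = ['']   # access prefixes (kept row-distinct, so the table is consistent)
    E = ['']   # distinguishing suffixes
    T = {}     # query cache: word -> bool

    def q(w):
        if w not in T:
            T[w] = member(w)
        return T[w]

    def row(s):
        return tuple(q(s + e) for e in E)

    while True:
        # Closedness: every one-letter extension of a prefix in S must
        # behave like some prefix already in S; otherwise add it.
        closed = False
        while not closed:
            closed = True
            rows = {row(t) for t in S}
            for s, a in product(S, alphabet):
                if row(s + a) not in rows:
                    S.append(s + a)
                    closed = False
                    break
        # Build the hypothesis DFA: one state per distinct row.
        idx = {row(s): i for i, s in enumerate(S)}
        delta = {(idx[row(s)], a): idx[row(s + a)]
                 for s in S for a in alphabet}
        start = idx[row('')]
        accepting = {idx[row(s)] for s in S if q(s)}

        def run(w):
            st = start
            for a in w:
                st = delta[(st, a)]
            return st in accepting

        # Approximate equivalence query: exhaustively test short words.
        ce = next((w for n in range(max_len + 1)
                   for w in map(''.join, product(alphabet, repeat=n))
                   if run(w) != q(w)), None)
        if ce is None:
            return len(S), start, accepting, delta
        # Counterexample found: add all its suffixes as new columns,
        # which splits at least one hypothesis state on the next round.
        for i in range(len(ce)):
            if ce[i:] not in E:
                E.append(ce[i:])

# Example target: words over {a, b} with an even number of a's.
n_states, start, accepting, delta = lstar(
    ['a', 'b'], lambda w: w.count('a') % 2 == 0)
```

In the paper's setting the membership oracle corresponds to driving the Web application through a sequence of user actions, and the dynamically enlarged alphabet corresponds to newly discovered links being added to `alphabet` between learning rounds.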