The number of potential execution paths through a program usually grows dramatically with its size, yet many coding errors manifest themselves only on a few particular paths. Selecting a feasible subset of all paths to cover, such that most of the errors are found, is therefore a central challenge for software testers. This article describes a way of using behavioural models (such as state diagrams) to separate concerns in structural testing. Each model describes one concern, such as a usage protocol, a policy, or a more complex behaviour. The goal is to obtain a better and more differentiated reliability assessment from fewer test cases, to find bugs that would probably not manifest themselves otherwise, and to provide helpful information for debugging. In contrast to many other approaches, which aim at a high level of automation or at synergies between the design and test processes, our approach allows for detecting more errors with the same test cases (by means of generated built-in tests) and for selecting better test cases (using adequate coverage criteria). Having to supply the required knowledge in the form of models means shifting effort from testing to development.
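To make the idea concrete, the following is a minimal sketch of one concern, a usage protocol, expressed as a finite-state model and woven into a class as a built-in test. All names (the file protocol, `ProtocolMonitor`, `MonitoredFile`) are illustrative assumptions, not taken from the paper; the point is only to show how a behavioural model can both detect protocol violations at run time and supply a model-based coverage measure for test selection.

```python
# Hypothetical sketch, not the paper's implementation: a usage protocol
# (open -> read* -> close) modelled as a state machine and used as a
# built-in test inside the class under test.

class ProtocolViolation(Exception):
    """Raised when an operation is invoked in a state that forbids it."""


class ProtocolMonitor:
    """Finite-state model of a single concern, e.g. a usage protocol."""

    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start
        self.covered = set()            # transitions exercised so far

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ProtocolViolation(
                f"event '{event}' not allowed in state '{self.state}'")
        self.covered.add(key)
        self.state = self.transitions[key]

    def transition_coverage(self):
        """Model-based coverage criterion: fraction of transitions covered."""
        return len(self.covered) / len(self.transitions)


# Illustrative protocol: a file must be opened before reading,
# and closed afterwards.
FILE_PROTOCOL = {
    ("closed", "open"): "open",
    ("open", "read"): "open",
    ("open", "close"): "closed",
}


class MonitoredFile:
    """Production class with the protocol monitor as a built-in test."""

    def __init__(self):
        self._monitor = ProtocolMonitor(FILE_PROTOCOL, "closed")

    def open(self):
        self._monitor.fire("open")

    def read(self):
        self._monitor.fire("read")
        return "data"

    def close(self):
        self._monitor.fire("close")
```

Run against an ordinary functional test suite, the monitor turns each test into a stronger one: a test that happens to call `read()` on a closed file now fails with a `ProtocolViolation` instead of silently passing, and `transition_coverage()` indicates how much of the modelled behaviour the suite has actually exercised.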