Model checking, a successful analysis technique, can also be employed to generate test cases from formal models. When a model checker is used for test-case generation, its witness (or counterexample) generation capability is leveraged to construct test cases: test criteria are expressed as temporal properties, and the witness traces generated for these properties are instantiated into complete test sequences satisfying the criteria. In this report we describe an experiment investigating the fault-finding capability of test suites generated to satisfy three specification coverage metrics proposed in the literature: state, transition, and decision coverage. Our findings indicate that although these criteria may be reasonable for measuring the adequacy of an existing test suite, they are unsuitable as targets for test-suite generation. In short, the generated test sequences technically provide adequate coverage, but do so in a way that exercises only a small portion of the formal model. We conclude that automated testing techniques must be pursued with great caution and that new coverage criteria targeting formal specifications are needed.
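To make the approach concrete, the following is a minimal sketch (not taken from the report; the state machine, property syntax, and helper names are illustrative assumptions) of how test criteria become temporal properties. For transition coverage, each transition yields a "trap property" asserting that the transition never fires; a model checker's counterexample refuting that property is precisely a witness trace exercising the transition, which can then be instantiated as a test sequence.

```python
# Sketch: generating LTL "trap properties" for transition coverage.
# Each transition (src, guard, dst) of a hypothetical state machine yields a
# property claiming the transition is never taken. Feeding such a property to
# a model checker produces a counterexample trace that reaches `src` with
# `guard` true and steps to `dst` -- i.e., a test covering that transition.
# The syntax (G, X) is generic LTL, not tied to any particular checker.

def trap_property(src: str, guard: str, dst: str) -> str:
    """LTL claim that the transition from src to dst under guard never fires."""
    return f"G !(state = {src} & {guard} & X state = {dst})"

# Illustrative transitions of a toy controller model.
transitions = [
    ("Idle", "start_button", "Running"),
    ("Running", "stop_button", "Idle"),
]

for src, guard, dst in transitions:
    print(trap_property(src, guard, dst))
```

The danger the report identifies arises here: a model checker typically returns the *shortest* counterexample for each trap property, so the resulting test suite can satisfy the coverage criterion while visiting only a small fraction of the model's behavior.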