Conformance testing in model-based development refers to the testing activity that verifies whether the code generated (manually or automatically) from the model is behaviorally equivalent to the model. At present, the adequacy of conformance testing is inferred by measuring the structural coverage achieved over the model. We hypothesize that adequacy metrics for conformance testing should consider structural coverage over the requirements, either in place of or in addition to structural coverage over the model. Measuring structural coverage over the requirements indicates how well the conformance tests exercise the required behavior of the system.

We conducted an experiment to investigate the hypothesis that structural coverage over formal requirements is more effective than structural coverage over the model as an adequacy measure for conformance testing. This hypothesis was rejected at 5% statistical significance on three of the four case examples in our experiment. Nevertheless, the tests providing requirements coverage found several faults that remained undetected by tests providing model coverage. We therefore formed a second hypothesis: complementing model coverage with requirements coverage yields a more effective adequacy measure than model coverage alone for conformance testing. In our experiment, test suites providing both requirements coverage and model coverage were more effective at finding faults than test suites providing model coverage alone, at 5% statistical significance. Based on our results, we believe that existing adequacy measures for conformance testing that consider only model coverage can be strengthened by combining them with rigorous requirements coverage metrics.
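To make the combined adequacy idea concrete, the following is a minimal, hypothetical Python sketch (not taken from the paper) of how a test suite might be judged against both model-coverage obligations and requirements-coverage obligations rather than model coverage alone. All obligation names, test names, and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch: adequacy judged on both model coverage and
# requirements coverage, instead of model coverage alone.
from typing import Dict, Set

def coverage(suite: Dict[str, Set[str]], obligations: Set[str]) -> float:
    """Fraction of coverage obligations satisfied by at least one test."""
    covered = set().union(*suite.values()) if suite else set()
    return len(covered & obligations) / len(obligations) if obligations else 1.0

# Illustrative obligations: e.g., structural obligations over the model
# and obligations derived from formalized requirements.
model_obligations = {"m1", "m2", "m3", "m4"}
requirement_obligations = {"r1", "r2", "r3"}

# Illustrative per-test records of which obligations each test exercises.
suite = {
    "test_1": {"m1", "m2", "r1"},
    "test_2": {"m3", "m4"},
    "test_3": {"m2", "r2", "r3"},
}

model_cov = coverage(suite, model_obligations)
req_cov = coverage(suite, requirement_obligations)

# Combined adequacy: require both measures to reach a chosen threshold
# (here, full coverage) before declaring the suite adequate.
adequate = model_cov >= 1.0 and req_cov >= 1.0
print(f"model: {model_cov:.0%}, requirements: {req_cov:.0%}, adequate: {adequate}")
```

In this sketch the suite satisfies all obligations of both kinds and is therefore deemed adequate; a suite achieving only full model coverage would not be, which mirrors the paper's conclusion that model-only adequacy measures can be strengthened by requirements coverage.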