- Using model checking to generate tests from requirements specifications. ESEC/FSE-7: Proceedings of the 7th European Software Engineering Conference held jointly with the 7th ACM SIGSOFT International Symposium on Foundations of Software Engineering.
- A Comparison of Some Structural Testing Strategies. IEEE Transactions on Software Engineering.
- Experimental Evaluation of the Variation in Effectiveness for DC, FPC and MC/DC Test Criteria. ISESE '03: Proceedings of the 2003 International Symposium on Empirical Software Engineering.
- Generating Efficient Test Sets with a Model Checker. SEFM '04: Proceedings of the Second International Conference on Software Engineering and Formal Methods.
- Coverage metrics for requirements-based testing. Proceedings of the 2006 International Symposium on Software Testing and Analysis.
- A comparison of MC/DC, MUMCUT and several other coverage criteria for logical decisions. Journal of Systems and Software, special issue: Quality Software.
- ACM SIGSOFT Software Engineering Notes.
- The effect of program and model structure on MC/DC test adequacy coverage. Proceedings of the 30th International Conference on Software Engineering.
- On the danger of coverage directed test case generation. FASE '12: Proceedings of the 15th International Conference on Fundamental Approaches to Software Engineering.
Chilenski and Miller [1] claim that the error detection probability of a test set achieving full modified condition/decision coverage (MC/DC) on the system under test converges to 100% as the number of test cases increases; however, there are also examples where the error detection probability of an MC/DC-adequate test set is zero. In this work we analyze the effective error detection rate of a test set that achieves the maximum possible MC/DC on the code of a case study from the automotive domain. First, we generate the test cases automatically with a model checker. Then we mutate the original program to create three error scenarios: the first targets errors in the value domain, the second errors in the variable names, and the third errors in the operators of the Boolean expressions in the decisions of the case study. Applying the test set to these mutated program versions shows that all value errors are detected, but the detection rate for mutated variable names and mutated operators is disappointing: in our case study, 22% of the mutated variable names and 8% of the mutated operators are not detected by the original MC/DC test set. With this work we show that testing a system with a test set that achieves the maximum possible MC/DC on the code detects fewer errors than expected.
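The abstract does not reproduce the case study's decisions, but the surviving-operator-mutant phenomenon it reports can be sketched on a small, hypothetical example. The decision `(a and b) or c` below and its mutant are our own illustration, not taken from the paper: a test set that satisfies MC/DC for the original decision (each condition is shown to independently affect the outcome) can still agree with an operator mutant on every test case, so the mutant goes undetected.

```python
from itertools import product

def original(a, b, c):
    # Hypothetical decision, not from the paper's case study.
    return (a and b) or c

def operator_mutant(a, b, c):
    # Operator mutation: the 'and' and 'or' operators are swapped.
    return a and (b or c)

# An MC/DC-adequate test set for (a and b) or c: for each condition there is
# a pair of tests differing only in that condition with different outcomes
# (a: tests 0/1, b: tests 0/2, c: tests 3/2).
mcdc_tests = [
    (True,  True,  False),   # -> True
    (False, True,  False),   # -> False
    (True,  False, False),   # -> False
    (True,  False, True),    # -> True
]

# The mutant agrees with the original on every MC/DC test case ...
survives = all(original(*t) == operator_mutant(*t) for t in mcdc_tests)

# ... yet the two programs differ on other inputs, so the mutant is a real fault.
killing_inputs = [t for t in product([False, True], repeat=3)
                  if original(*t) != operator_mutant(*t)]

print(survives)        # True: the MC/DC test set does not detect this mutant
print(killing_inputs)  # inputs that would expose the mutation
```

Running this prints `True` followed by the two inputs `(False, False, True)` and `(False, True, True)`, showing that the mutant is killable in principle but invisible to this particular MC/DC-adequate test set, in line with the low-but-nonzero escape rates the study reports.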