Various test case selection criteria have been proposed for the quality testing of software. Test sets satisfying different criteria commonly differ in both size and fault-detection ability; in particular, test sets that satisfy a stronger criterion and detect more faults usually contain more test cases. A question that often puzzles software testing practitioners and researchers is: when a testing criterion C1 helps to detect more faults than another criterion C2, is it because C1 specifically requires test cases that are more fault-sensitive than those for C2, or essentially because C1 requires more test cases than C2? In this paper, we discuss several methods and approaches for investigating this question, and empirically compare several common coverage criteria for testing logical decisions, taking into consideration the different sizes of the test sets that these criteria require. Our results clearly demonstrate that the stronger criteria under study are more fault-sensitive than the weaker ones, and not merely because they require more test cases. More importantly, we illustrate a general approach, which takes the size of the generated test sets into account, for demonstrating the superiority of one testing criterion over another.
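The size-controlled comparison described above can be illustrated with a toy experiment (this is a hypothetical sketch, not the paper's actual experimental setup): generate test sets that satisfy a coverage criterion for a logical decision, generate random test sets of exactly the same size, and compare how often each kind of set distinguishes a seeded mutant from the original decision. The specific decision, the seeded fault, and the use of condition coverage as the criterion are all illustrative assumptions.

```python
import itertools
import random

# Original logical decision and a mutant with a seeded operator fault
# ('and' replaced by 'or'). Both are illustrative choices.
def decision(a, b, c):
    return (a and b) or c

def mutant(a, b, c):
    return (a or b) or c  # seeded fault

# All 8 possible inputs for the three conditions.
INPUTS = list(itertools.product([False, True], repeat=3))

def detects(test_set):
    """A test set detects the fault if some test case exposes a difference."""
    return any(decision(*t) != mutant(*t) for t in test_set)

def condition_coverage_set():
    """Randomly pick a size-2 test set in which every condition
    takes both truth values (a simple condition-coverage criterion)."""
    while True:
        ts = random.sample(INPUTS, 2)
        covered = all(
            any(t[i] for t in ts) and any(not t[i] for t in ts)
            for i in range(3)
        )
        if covered:
            return ts

def random_set(size):
    """Size-matched random test set: the control group."""
    return random.sample(INPUTS, size)

random.seed(0)
TRIALS = 2000
crit_hits = sum(detects(condition_coverage_set()) for _ in range(TRIALS)) / TRIALS
rand_hits = sum(detects(random_set(2)) for _ in range(TRIALS)) / TRIALS
print(f"criterion-satisfying sets: {crit_hits:.3f}")
print(f"size-matched random sets:  {rand_hits:.3f}")
```

Because both groups use test sets of the same size, any difference in the detection rates can be attributed to the criterion itself rather than to the number of test cases, which is the essence of the approach the abstract describes.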