Selecting Software Test Data Using Data Flow Information
IEEE Transactions on Software Engineering
A structural test selection criterion
Information Processing Letters
An Applicable Family of Data Flow Testing Criteria
IEEE Transactions on Software Engineering
Comparing test data adequacy criteria
ACM SIGSOFT Software Engineering Notes
Theoretical comparison of testing methods
TAV3 Proceedings of the ACM SIGSOFT '89 third symposium on Software testing, analysis, and verification
A Formal Evaluation of Data Flow Path Selection Criteria
IEEE Transactions on Software Engineering
Partition Testing Does Not Inspire Confidence (Program Testing)
IEEE Transactions on Software Engineering
Analyzing Partition Testing Strategies
IEEE Transactions on Software Engineering
Comparison of program testing strategies
TAV4 Proceedings of the symposium on Testing, analysis, and verification
Assessing the fault-detecting ability of testing methods
SIGSOFT '91 Proceedings of the conference on Software for critical systems
Experimental results from an automatic test case generator
ACM Transactions on Software Engineering and Methodology (TOSEM)
Data Flow Analysis in Software Reliability
ACM Computing Surveys (CSUR)
Data Abstraction, Implementation, Specification, and Testing
ACM Transactions on Programming Languages and Systems (TOPLAS)
The Art of Software Testing
A Formal Analysis of the Fault-Detecting Ability of Testing Methods
IEEE Transactions on Software Engineering
Data flow analysis techniques for test data selection
ICSE '82 Proceedings of the 6th international conference on Software engineering
Generating test suites for software load testing
ISSTA '94 Proceedings of the 1994 ACM SIGSOFT international symposium on Software testing and analysis
A simplified domain-testing strategy
ACM Transactions on Software Engineering and Methodology (TOSEM)
An exact array reference analysis for data flow testing
Proceedings of the 18th international conference on Software engineering
On the Expected Number of Failures Detected by Subdomain Testing and Random Testing
IEEE Transactions on Software Engineering
Choosing a testing method to deliver reliability
ICSE '97 Proceedings of the 19th international conference on Software engineering
Software unit test coverage and adequacy
ACM Computing Surveys (CSUR)
Evaluating Testing Methods by Delivered Reliability
IEEE Transactions on Software Engineering
Further empirical studies of test effectiveness
SIGSOFT '98/FSE-6 Proceedings of the 6th ACM SIGSOFT international symposium on Foundations of software engineering
Estimation of software reliability by stratified sampling
ACM Transactions on Software Engineering and Methodology (TOSEM)
Partition Testing vs. Random Testing: The Influence of Uncertainty
IEEE Transactions on Software Engineering
Comparison of delivered reliability of branch, data flow and operational testing: A case study
Proceedings of the 2000 ACM SIGSOFT international symposium on Software testing and analysis
Analysis and Testing of Programs with Exception Handling Constructs
IEEE Transactions on Software Engineering
Finding failures by cluster analysis of execution profiles
ICSE '01 Proceedings of the 23rd International Conference on Software Engineering
Deriving models of software fault-proneness
SEKE '02 Proceedings of the 14th international conference on Software engineering and knowledge engineering
Comparing test sets and criteria in the presence of test hypotheses and fault domains
ACM Transactions on Software Engineering and Methodology (TOSEM)
The Automatic Generation of Load Test Suites and the Assessment of the Resulting Software
IEEE Transactions on Software Engineering
Some Critical Remarks on a Hierarchy of Fault-Detecting Abilities of Test Methods
IEEE Transactions on Software Engineering
On the Relationships Among the All-Uses, All-DU-Paths, and All-Edges Testing Criteria
IEEE Transactions on Software Engineering
A Formal Analysis of the Subsume Relation Between Software Test Adequacy Criteria
IEEE Transactions on Software Engineering
Difficulties Measuring Software Risk in an Industrial Environment
DSN '01 Proceedings of the 2001 International Conference on Dependable Systems and Networks (formerly: FTCS)
Program Segmentation for Controlling Test Coverage
ISSRE '97 Proceedings of the Eighth International Symposium on Software Reliability Engineering
Using Simulation for Assessing the Real Impact of Test Coverage on Defect Coverage
ISSRE '99 Proceedings of the 10th International Symposium on Software Reliability Engineering
Using operational distributions to judge testing progress
Proceedings of the 2003 ACM symposium on Applied computing
On the analytical comparison of testing techniques
ISSTA '04 Proceedings of the 2004 ACM SIGSOFT international symposium on Software testing and analysis
ACM Transactions on Software Engineering and Methodology (TOSEM)
Software Testing Research: Achievements, Challenges, Dreams
FOSE '07 2007 Future of Software Engineering
An objective comparison of the cost effectiveness of three testing methods
Information and Software Technology
Software testing: a graph theoretic approach
International Journal of Information and Communication Technology
Comparing the effectiveness of testing techniques
Formal methods and testing
TACAS'06 Proceedings of the 12th international conference on Tools and Algorithms for the Construction and Analysis of Systems
Test generation based on control and data dependencies within system specifications in SDL
Computer Communications
Comparing multi-point stride coverage and dataflow coverage
Proceedings of the 2013 International Conference on Software Engineering
This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C1 properly covers C2, then C1 is guaranteed to be better at detecting faults than C2, in the following sense: a test suite built by independently selecting one test case at random from each subdomain induced by C1 is at least as likely to detect a fault as a test suite similarly selected using C2. In contrast, if C1 subsumes C2 but does not properly cover it, this guarantee need not hold. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria.
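The probabilistic measure used in the abstract can be illustrated with a small sketch. When a suite draws one test uniformly at random from each subdomain, the probability of detecting at least one fault is 1 minus the product of the per-subdomain miss probabilities. The subdomain layouts and failure-causing inputs below are invented for illustration; the sketch uses refinement (each C2 subdomain split into C1 subdomains) as one simple case in which C1 covers C2's subdomains, and is not the paper's full definition of proper covering.

```python
def detect_prob(subdomains, failing):
    """P(suite detects a fault) when one test is drawn uniformly at
    random from each subdomain: 1 - prod_i (1 - theta_i), where
    theta_i is the fraction of failure-causing inputs in subdomain i."""
    miss = 1.0
    for sd in subdomains:
        theta = len(failing & sd) / len(sd)
        miss *= 1.0 - theta
    return 1.0 - miss

# Hypothetical 100-input domain with three failure-causing inputs.
domain = set(range(100))
failing = {3, 41, 42}

# Criterion C2 induces two subdomains; C1 refines each of them,
# so every C2 subdomain is a union of C1 subdomains.
c2 = [set(range(0, 50)), set(range(50, 100))]
c1 = [set(range(0, 25)), set(range(25, 50)),
      set(range(50, 75)), set(range(75, 100))]

p1 = detect_prob(c1, failing)   # 1 - (1 - 1/25)(1 - 2/25) = 0.1168
p2 = detect_prob(c2, failing)   # 1 - (1 - 3/50)           = 0.06
assert p1 >= p2                 # the finer criterion is at least as likely
```

Here isolating the two clustered failures (41, 42) in a smaller subdomain raises the chance that a randomly drawn test hits one of them, which is the intuition behind the guarantee the abstract describes.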