Comparison of program testing strategies
TAV4 Proceedings of the symposium on Testing, analysis, and verification
Test data adequacy criteria have been compared in many ways in the literature, ranging from the relative difficulty of satisfying them to the relative probability that test sets satisfying them will expose program errors. Each method of comparison induces an ordering on criteria, and these orderings often differ significantly from one another. We investigate the various methods of comparing criteria and show how the induced orderings are related, categorizing the methods as satisfiability-based, correctness-based, or complexity-based. Since no existing method of comparison is based on the cost of using a criterion, we also propose a formal model for cost-based comparison of criteria.
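One classic ordering arising from the "relative difficulty of satisfying" comparison is subsumption: criterion C1 subsumes C2 if every test set adequate for C1 is also adequate for C2 (for example, branch coverage subsumes statement coverage). A minimal sketch, checking subsumption by brute force over a hypothetical test pool whose statement and branch coverage data are invented purely for illustration:

```python
from itertools import chain, combinations

# Hypothetical coverage data for a toy program: which statements and
# branch outcomes each test in the pool exercises.
TESTS = {
    "t1": {"stmts": {1, 2}, "branches": {"b1"}},
    "t2": {"stmts": {1, 2}, "branches": {"b2"}},
    "t3": {"stmts": {2},    "branches": set()},
}
ALL_STMTS = {1, 2}
ALL_BRANCHES = {"b1", "b2"}

def covered(test_set, kind):
    # Union of the coverage items exercised by the tests in the set.
    return set().union(*(TESTS[t][kind] for t in test_set))

def statement_adequate(test_set):
    # Criterion: the test set executes every statement.
    return covered(test_set, "stmts") >= ALL_STMTS

def branch_adequate(test_set):
    # Criterion: the test set takes every branch outcome.
    return covered(test_set, "branches") >= ALL_BRANCHES

def subsumes(c1, c2):
    """C1 subsumes C2 iff every nonempty test set adequate for C1
    is also adequate for C2, checked exhaustively over the pool."""
    pool = list(TESTS)
    subsets = chain.from_iterable(
        combinations(pool, r) for r in range(1, len(pool) + 1))
    return all(c2(s) for s in subsets if c1(s))

print(subsumes(branch_adequate, statement_adequate))  # True
print(subsumes(statement_adequate, branch_adequate))  # False: {t1} covers
# both statements but misses branch b2
```

Note that subsumption is only one of the induced orderings; it says nothing about fault-exposing probability or cost, which is why the different comparison methods can disagree.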