Comment on "The application of error-sensitive testing strategies to debugging"
ACM SIGSOFT Software Engineering Notes
Program errors can be considered from two perspectives: cause and effect. The goal of program testing is to detect errors by discovering their effects, while the goal of debugging is to locate the associated cause. In this paper, we explore ways in which some results of testing research can be applied to the debugging process. In particular, two error-sensitive test data selection strategies, computation testing and domain testing, are described. Ways in which these selection strategies can be used as debugging aids are then discussed.
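As a rough illustration of the domain-testing idea referred to above (a minimal sketch with hypothetical names, not code from the paper): domain testing selects "on" points lying on a path-domain border and "off" points just beside it, so that a small shift in the border caused by a predicate error changes the path taken for at least one of the points.

```python
# Hypothetical sketch of domain-testing point selection. The names
# domain_test_points and program_under_test are illustrative only.
# For a path whose domain border is defined by the predicate x >= k,
# pick an "on" point lying on the border and an "off" point just
# outside it; a shifted border (a domain error, e.g. x > k coded by
# mistake) misclassifies one of the two points.

def domain_test_points(k, epsilon=1):
    """Return (on, off) test points for the border of `x >= k`."""
    on = k             # lies on the border, satisfies the predicate
    off = k - epsilon  # just off the border, fails the predicate
    return on, off

def program_under_test(x, k=10):
    # Toy program: the path under test is taken when x >= k.
    return "path-A" if x >= k else "path-B"

on, off = domain_test_points(10)
assert program_under_test(on) == "path-A"
assert program_under_test(off) == "path-B"
```

Running the two points through a correct implementation exercises both sides of the border; under a border-shift fault, the failing point localizes the error to that predicate, which is the sense in which the strategy doubles as a debugging aid.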