Regression testing involves testing a modified program to establish confidence in the modifications. Existing regression testing methods generate test cases to satisfy selected testing criteria in the hope that this process will reveal faults in the modified program. In this paper we present a novel approach to automated regression test generation in which every generated test case uncovers at least one error. The approach tests the common functionality of the original program and its modified version, i.e., it applies to programs whose functionality is unchanged by the modifications. The goal is to identify test cases on which the original program and the modified program produce different outputs; any such test case uncovers an error. The problem of finding such a test case can be reduced to the problem of finding a program input on which a selected statement is executed. As a result, existing methods of automated test data generation for white-box testing can be used to generate these tests. Our experiments show that this approach can improve the chances of finding software errors compared with existing regression testing methods. Its advantages are that it is fully automated and that every generated test case reveals an error.
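The reduction described above can be sketched as follows: the original program P and its modified version P' are combined into a driver containing a target statement that is reachable if and only if their outputs differ, so any white-box test generator that covers the target produces an error-revealing test. This is a minimal illustrative sketch, not the paper's implementation; `original`, `modified`, and the exhaustive search standing in for a test data generator are all hypothetical.

```python
def original(x):
    # Original version: absolute value.
    return x if x >= 0 else -x

def modified(x):
    # Modified version with a seeded regression at x == 0
    # (hypothetical fault, for illustration only).
    if x == 0:
        return 1
    return x if x >= 0 else -x

def driver(x):
    """Combined driver: the target branch is reached iff outputs differ."""
    if original(x) != modified(x):
        return True   # TARGET: any input reaching this statement reveals an error
    return False

def find_revealing_input(domain):
    # Stand-in for a white-box test data generator aiming to
    # execute the TARGET statement; a naive search over a finite domain.
    for x in domain:
        if driver(x):
            return x
    return None

revealing = find_revealing_input(range(-10, 11))  # finds x == 0
```

In practice the search would be performed by a constraint-based or chaining test data generator rather than enumeration, but the construction is the same: covering the target statement and revealing a behavioral difference are one and the same goal.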