BERT: a tool for behavioral regression testing
Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering
During maintenance, it is common to run the new version of a program against its existing test suite to check whether the modifications to the program introduced unforeseen side effects. Although this kind of regression testing can be effective in identifying some change-related faults, it is limited by the quality of the existing test suite. Because generating tests for real programs is expensive, developers build test suites by finding acceptable trade-offs between the cost and the thoroughness of the tests. Such test suites necessarily target only a small subset of the program's functionality and may miss many regression faults. To address this issue, we introduce the concept of behavioral regression testing, whose goal is to identify behavioral differences between two versions of a program through dynamic analysis. Intuitively, given a set of changes in the code, behavioral regression testing works by (1) generating a large number of test cases that focus on the changed parts of the code, (2) running the generated test cases on the old and new versions of the code and identifying differences in the tests' outcomes, and (3) analyzing the identified differences and presenting them to developers. By focusing on a subset of the code and leveraging differential behavior, our approach can provide developers with more, and more focused, information than traditional regression testing techniques. This paper presents our approach and a preliminary assessment of its feasibility.
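The following is a minimal sketch, in Java, of the three-step workflow the abstract describes; it is not the tool's implementation. The methods lastDigitOld and lastDigitNew, the seeded random input generator, and the modulo change are all hypothetical stand-ins: in the actual approach, the two implementations would be the compiled old and new versions of the changed code, and the inputs would come from an automatic test-case generator focused on the changes.

import java.util.Random;

public class BehavioralDiff {

    // Hypothetical old version of a changed method.
    static int lastDigitOld(int x) {
        return x % 10;
    }

    // Hypothetical new version: a maintenance change switched to
    // floorMod, which alters the result for negative inputs.
    static int lastDigitNew(int x) {
        return Math.floorMod(x, 10);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for repeatability
        int differences = 0;

        // (1) Generate a large number of test inputs for the changed method.
        for (int i = 0; i < 10_000; i++) {
            int input = rnd.nextInt();

            // (2) Run each input on both versions and record the outcomes.
            int oldResult = lastDigitOld(input);
            int newResult = lastDigitNew(input);

            // (3) Identify behavioral differences and report a sample of
            // them to the developer for inspection.
            if (oldResult != newResult && differences++ < 5) {
                System.out.printf("input=%d: old=%d, new=%d%n",
                                  input, oldResult, newResult);
            }
        }
        System.out.println(differences + " behavioral differences found");
    }
}

This sketch deliberately compares only return values on primitive inputs; the approach as described would also have to handle object-valued inputs and outputs, side effects on program state, and the filtering and presentation of the identified differences.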