Changes often introduce program errors, and hence recent software testing literature has focused on generating tests which stress changes. In this paper, we argue that changes cannot be treated as isolated program artifacts to be stressed via testing. Instead, it is the complex dependencies across multiple changes which introduce subtle errors. Furthermore, the complex dependence structures that need to be exercised to expose such errors ensure that they remain undiscovered even in well-tested and deployed software. We motivate our work with empirical evidence from a well-tested and stable project - Linux GNU Coreutils - where we found that, for the utilities we investigated, one third of the regressions take more than two (2) years to be fixed, and that two thirds of such long-standing regressions are introduced by change interactions. To combat change interaction errors, we first define a notion of change interaction in which several program changes affect the result of a program statement via program dependencies. Based on this notion, we propose a change sequence graph (CSG) to summarize the control flow and dependencies across changes. The CSG is then used as a guide during program path exploration via symbolic execution, thereby efficiently producing test cases which witness change interaction errors. Our experimental infrastructure was deployed on various utilities of GNU Coreutils, which have been distributed with Linux for almost twenty years. Apart from finding five (5) previously unknown errors in the utilities, we found that only one in five generated test cases exercises a change sequence that is critical to exposing a change interaction error, yet such test cases are an order of magnitude more likely to expose an error. On the other hand, stressing changes in isolation exposed only half of the change interaction errors. These results demonstrate both the importance and the difficulty of change dependence-aware regression testing.
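To make the idea of a change sequence graph and its use as an exploration guide concrete, the following is a minimal illustrative sketch, not the authors' implementation: the CSG is modeled as a plain adjacency list whose nodes are changed statements and whose edges mean that one change can affect another via control flow or a program dependency. The graph, node names, and helper functions (change_sequences, covers_interaction) are hypothetical.

```python
# Hypothetical CSG for three changes c1, c2, c3:
# an edge a -> b means change b is dependent on (reachable from) change a.
csg = {
    "c1": {"c2"},
    "c2": {"c3"},
    "c3": set(),
}

def change_sequences(path, csg):
    """Project an executed path (a sequence of visited statements) onto the
    changed statements it covers, and keep only consecutive pairs that are
    connected in the CSG, i.e. potential change interactions."""
    changed = [s for s in path if s in csg]
    pairs = zip(changed, changed[1:])
    return [(a, b) for a, b in pairs if b in csg[a]]

def covers_interaction(path, csg):
    """A path is interaction-critical if it exercises at least one CSG edge;
    such paths could be prioritized when steering symbolic execution."""
    return bool(change_sequences(path, csg))

if __name__ == "__main__":
    # Reaches c1 and then c2: exercises the interaction c1 -> c2.
    print(covers_interaction(["s0", "c1", "s5", "c2", "s9"], csg))  # True
    # Reaches only c3: stresses a single change in isolation.
    print(covers_interaction(["s0", "s5", "c3"], csg))              # False
```

In this reading, a path explorer would favor states whose partial paths can still complete a CSG edge, mirroring the abstract's claim that interaction-critical test cases are rarer but far more likely to expose an error than tests that stress changes in isolation.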