Regression testing verifies that systems under evolution retain their existing functionality. With large test sets accumulated over time, this is a costly process, especially if testing is manual or the system under test is remote or available for testing only during a limited period. Often, the changes made to a system are local, arising from bug fixes or from specific additions or modifications to the functionality. Rerunning the entire test set in such cases is wasteful. Instead, we would like to identify the parts of the system affected by the changes and select for rerun only those test cases that exercise functionality that could have been affected. This paper proposes a model-based approach to this problem, in which service interfaces are described by visual contracts, i.e., pre- and postconditions expressed as graph transformation rules. The analysis of conflicts and dependencies between these rules allows us to assess the impact of a change to the signature, contract, or implementation of an operation on other operations, and thus to decide which test cases require re-execution. Apart from discussing the conceptual foundations and justification of the approach, we illustrate and evaluate it on a case study of a bug tracking service in several versions.
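The selection step described above can be sketched as a small program: given a dependence relation between operations (assumed to have been derived elsewhere, e.g., from conflict/dependency analysis of the visual contracts) and a mapping from test cases to the operations they exercise, compute the set of operations transitively affected by a change and keep only the tests touching that set. All operation and test names below are invented for illustration; this is a minimal sketch, not the paper's implementation.

```python
# Hypothetical sketch of dependence-based regression test selection.
# depends_on maps an operation to the operations it depends on;
# this relation is assumed to come from analyzing the visual contracts.

def affected_operations(changed, depends_on):
    """Transitively close the dependence relation over the changed set."""
    affected = set(changed)
    frontier = set(changed)
    while frontier:
        # Operations whose dependencies touch the current frontier.
        frontier = {op for op, deps in depends_on.items()
                    if deps & frontier and op not in affected}
        affected |= frontier
    return affected

def select_tests(tests, changed, depends_on):
    """Keep a test case if it exercises any affected operation."""
    affected = affected_operations(changed, depends_on)
    return [name for name, ops in tests.items() if set(ops) & affected]

# Illustrative data for a bug-tracking service (names are invented):
depends_on = {
    "closeBug": {"reportBug"},
    "reopenBug": {"closeBug"},
    "listOpenBugs": {"reportBug", "reopenBug"},
}
tests = {
    "t1": ["reportBug", "closeBug"],
    "t2": ["listOpenBugs"],
    "t3": ["reopenBug"],
}

# A change to reopenBug affects reopenBug and listOpenBugs,
# so only t2 and t3 need to be rerun.
print(select_tests(tests, changed={"reopenBug"}, depends_on=depends_on))
# → ['t2', 't3']
```

In this toy run, test t1 is skipped because neither of its operations is in the affected set, matching the paper's goal of rerunning only tests whose functionality could have been impacted by the change.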