Regression testing is a widely-used method for checking whether modifications to a software system have adversely affected its overall functionality. It is potentially an expensive process, since test suites can be large and time-consuming to execute. The overall costs can be reduced if tests that cannot possibly be affected by the modifications are ignored. Various techniques for selecting subsets of tests for re-execution have been proposed, as well as methods for proving that particular test selection criteria do not omit relevant tests. However, current selection techniques are focussed on identifying the impact of modifications on program state. They assume that the only factor that can change the result of a test case is the set of input values given for it, while all other influences on the behaviour of the program (such as external interrupts or hardware faults) will be constant for each re-execution of the test. This assumption is impractical for an important class of software system, namely systems which make use of an external persistent state, such as a database management system, to share information between application invocations. If applied naively to such systems, existing regression test selection algorithms will omit certain test cases which could in fact be affected by the modifications to the code. In this paper, we show why this is the case, and propose a new definition of safety for regression test selection that takes into account the interactions of the program with a database state. We also present an algorithm and associated tool that safely performs test selection for database-driven applications, and (since efficiency is an important concern for test selection algorithms) we propose a variant that defines safety in terms of database state alone.
This latter form of safety allows more efficient regression testing to be performed for applications in which program state is used only as a temporary holding space for data from the database. The claims of increased efficiency of both forms of safety are supported by the results of an empirical comparison with existing techniques.
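To make the core idea concrete, the following is a minimal sketch (not the paper's actual algorithm or tool) of database-aware test selection. It assumes hypothetical metadata: for each test, the functions it covers and the tables it reads; for each modified function, the tables it may write. A selector based on program state alone would pick only tests covering modified code; the database-aware variant also selects tests that read tables the modified code may write.

```python
# Illustrative sketch only: database-aware regression test selection.
# A test is re-run if it covers modified code OR reads a database table
# that modified code may write. All names and data are hypothetical.

def select_tests(tests, modified_functions):
    """Return the names of tests that must be re-executed."""
    # Tables that the modified code may write to ("dirty" tables).
    dirty_tables = set()
    for fn in modified_functions:
        dirty_tables |= fn["writes"]

    modified_names = {fn["name"] for fn in modified_functions}
    selected = []
    for test in tests:
        covers_modified = bool(test["covered"] & modified_names)
        reads_dirty = bool(test["reads"] & dirty_tables)
        if covers_modified or reads_dirty:
            selected.append(test["name"])
    return selected

modified = [{"name": "update_order", "writes": {"orders"}}]
tests = [
    {"name": "t_checkout", "covered": {"update_order"},  "reads": {"orders"}},
    {"name": "t_report",   "covered": {"render_report"}, "reads": {"orders"}},
    {"name": "t_login",    "covered": {"check_password"}, "reads": {"users"}},
]
print(select_tests(tests, modified))  # -> ['t_checkout', 't_report']
```

Note that `t_report` exercises none of the modified code, yet it reads the `orders` table that `update_order` may have changed; a purely program-state-based selector would unsafely omit it, which is exactly the gap the abstract describes.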