Testing a database engine has been and continues to be a challenging task. The space of possible SQL queries, along with their possible access paths, is practically unbounded, and it keeps growing as the feature set of modern DBMSs expands with every product release. To tackle these problems, random query generator tools have been used to create large numbers of test cases. While such generators can produce complex and syntactically correct SQL queries, they do not guarantee that the queries return results or exercise the desired DBMS components. Very often the generated queries contain logical contradictions, which cause "short-circuits" at early stages of query processing and prevent the lower layers of the database engine (query optimization, query execution, access methods, etc.) from being exercised. In this paper we present a random test case generation technique that addresses these problems. Our technique uses execution feedback, obtained from the DBMS under test, to guide the test generation process toward specific DBMS subcomponents and rarely exercised code paths. Test cases are created incrementally using a genetic approach that synthesizes query characteristics of interest for test coverage. Our experiments indicate that this technique outperforms other random testing methods in terms of efficiency and code coverage. We also provide experimental results showing that the use of execution feedback improves code coverage of specific DBMS components. Finally, we share the experience we gained from using this testing approach during the development cycles of Microsoft SQL Server.
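To make the feedback-guided genetic loop concrete, the following is a minimal sketch, not the implementation described in the paper. It assumes a hypothetical fragment pool (TABLES, PREDICATES), treats a query's predicate set as the chromosome, and uses SQLite's EXPLAIN QUERY PLAN output as a stand-in for the execution feedback that the real system would collect from the DBMS under test (e.g., which operators or components a query reaches). All names and the fitness heuristic are illustrative assumptions.

```python
import random
import sqlite3

# Hypothetical fragment pools; a real generator would derive these from the schema.
PREDICATES = ["t1.a > 5", "t1.b = t2.b", "t2.c LIKE 'x%'", "t1.a + t2.c < 100"]

def setup(conn):
    conn.executescript("""
        CREATE TABLE t1(a INT, b INT);
        CREATE TABLE t2(b INT, c INT);
        CREATE INDEX i1 ON t1(a);
    """)

def build_query(genes):
    # A "chromosome" is a set of predicate fragments combined into one WHERE clause.
    where = " AND ".join(genes) if genes else "1=1"
    return f"SELECT * FROM t1, t2 WHERE {where}"

def fitness(conn, sql, target="SEARCH"):
    # Execution feedback: reward query plans that reach the targeted operator.
    # Here EXPLAIN QUERY PLAN approximates the per-component feedback a real
    # DBMS under test would report.
    try:
        rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    except sqlite3.Error:
        return 0.0                      # invalid or contradictory query: no reward
    details = " ".join(str(r[-1]) for r in rows)
    return details.count(target) + 0.1 * len(rows)

def evolve(conn, generations=20, pop_size=8):
    pop = [random.sample(PREDICATES, k=random.randint(1, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(conn, build_query(g)), reverse=True)
        parents = scored[: pop_size // 2]                 # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = list(set(a[: len(a) // 2 + 1] + b[len(b) // 2 :]))  # crossover
            if random.random() < 0.3:                     # mutation: swap in a new predicate
                child[random.randrange(len(child))] = random.choice(PREDICATES)
            children.append(child)
        pop = parents + children
    return build_query(max(pop, key=lambda g: fitness(conn, build_query(g))))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    setup(conn)
    print(evolve(conn))
```

In this sketch the fitness function merely counts plan operators; the approach described above would instead use feedback tied to the targeted DBMS subcomponents (for example, coverage of specific optimizer or execution code paths) so that selection steers the population toward queries that exercise rarely reached code.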