Testing the accuracy of query optimizers
DBTest '12 Proceedings of the Fifth International Workshop on Testing Database Systems
Plan regressions pose a significant problem in commercial database systems: seemingly innocuous changes to a query optimizer component, such as the cost model or the search strategy, made to enhance optimization results may cause unexpected and detrimental changes to previously satisfactory query plans. Database vendors spend substantial resources on quality assurance to guard against this very issue, yet testing for plan regressions in optimizers has proven hard and inconclusive. This is due to the nature of the problem: the optimizer chooses a single plan---the Best Plan Found (BPF)---from a search space of up to hundreds of millions of different plan alternatives. It is standard practice to record a known-good BPF and test that this plan does not change. However, in the vast majority of cases the BPF is not affected by a code-level change, even though the change is known to affect many plans in the search space. In this paper, we propose a holistic approach to address this issue. Instead of focusing on test suites consisting of BPFs, we take the entire search space into account. We introduce a metric to assess the optimizer's accuracy across the entire search space. We present preliminary results using a commercial database system, demonstrate the usefulness of our methodology with a standard benchmark, and illustrate how to build such an early warning system.
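To make the idea of search-space-wide accuracy concrete, the sketch below scores a cost model by how well its estimated-cost ordering of candidate plans agrees with their actual execution times, using the fraction of concordantly ordered plan pairs. This is an illustrative assumption for exposition, not the metric defined in the paper; the plan names, costs, and runtimes are invented.

```python
# Hypothetical sketch: compare an optimizer's estimated-cost ranking of a
# plan search space against the plans' actual execution times. The
# concordant-pair score and all plan data below are illustrative
# assumptions, not the paper's metric.
from itertools import combinations

def ranking_accuracy(plans):
    """Fraction of plan pairs that estimated cost and actual runtime
    order the same way (1.0 = the cost model ranks plans perfectly)."""
    pairs = list(combinations(plans, 2))
    if not pairs:
        return 1.0
    concordant = sum(
        1 for a, b in pairs
        if (a["est_cost"] - b["est_cost"]) * (a["actual_ms"] - b["actual_ms"]) > 0
    )
    return concordant / len(pairs)

# Toy search space: three alternative plans for one query; the nested-loop
# plan's cost is misestimated relative to its actual runtime.
plans = [
    {"plan": "hash join",   "est_cost": 100.0, "actual_ms": 40.0},
    {"plan": "merge join",  "est_cost": 150.0, "actual_ms": 90.0},
    {"plan": "nested loop", "est_cost": 300.0, "actual_ms": 60.0},
]
print(ranking_accuracy(plans))  # 2 of 3 pairs concordant -> ~0.667
```

A score of 1.0 would mean the cost model orders every pair of plans correctly, so any BPF it picks is trustworthy; a drop in this score after a code change can flag a regression even when the BPF itself is unchanged.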