The need to test and tune database instances with production-like workloads (W), configurations (C), data (D), and resources (R) arises routinely. The further the W, C, D, and R used in testing and tuning deviate from what is observed on the production database instance, the less trustworthy the results of those tasks become. For example, it is common to hear about performance degradation observed after a production database is upgraded from one software version to another. A typical cause of this problem is that the W, C, D, or R used during upgrade testing differed in some way from those on the production database. Performing testing and tuning tasks in principled and automated ways is increasingly important because, spurred by innovations in cloud computing, the number of database instances that a database administrator (DBA) has to manage is growing rapidly. We present Flex, a platform for trustworthy testing and tuning of production database instances. Flex gives DBAs a high-level language, called Slang, to specify definitions of experiments for testing and tuning along with objectives for running them. Flex's orchestrator then schedules and runs these experiments automatically in a manner that meets the DBA-specified objectives. Flex has been fully prototyped. We present results from a comprehensive empirical evaluation that demonstrates Flex's effectiveness on diverse problems such as upgrade testing, near-real-time testing to detect data corruption, and server configuration tuning. We also report on our experience porting some of the testing and tuning tools described in the literature to run on the Flex platform.
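To make the Slang-plus-orchestrator workflow concrete, the following is a minimal, hypothetical sketch in Python. The abstract does not show Slang's actual syntax, so the `Experiment`, `Objective`, `run`, and `orchestrate` names below are illustrative assumptions: they only convey the idea of declaratively specifying an experiment's W, C, D, and R together with DBA objectives, and letting an orchestrator schedule runs within those objectives.

```python
# Hypothetical sketch, NOT the paper's actual Slang syntax: a declarative
# experiment specification in the spirit of Flex, plus a toy orchestrator.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Experiment:
    """One testing/tuning run against a production-like instance."""
    name: str
    workload: str              # W: captured workload to replay
    config: Dict[str, str]     # C: server configuration to apply
    data_snapshot: str         # D: snapshot of production data to load
    resources: Dict[str, int]  # R: resources (e.g., cpus, mem_gb) to allot


@dataclass
class Objective:
    """A DBA-specified objective the orchestrator must respect."""
    max_parallel: int = 2       # budget: concurrent experiments allowed
    deadline_hours: float = 24  # window in which all runs must finish


def run(exp: Experiment) -> Dict[str, object]:
    """Stand-in for provisioning a clone, applying C, and replaying W."""
    print(f"running {exp.name} with config {exp.config}")
    return {"name": exp.name, "latency_ms": 12.3}  # fake measurement


def orchestrate(experiments: List[Experiment],
                obj: Objective) -> List[Dict[str, object]]:
    """Toy scheduler: run experiments in batches of at most max_parallel."""
    results: List[Dict[str, object]] = []
    for i in range(0, len(experiments), obj.max_parallel):
        batch = experiments[i:i + obj.max_parallel]
        results.extend(run(e) for e in batch)  # sequential stand-in
    return results


if __name__ == "__main__":
    upgrade_test = Experiment(
        name="upgrade-test",
        workload="prod-trace-week-12",
        config={"version": "new", "shared_buffers": "8GB"},
        data_snapshot="snap-0142",
        resources={"cpus": 8, "mem_gb": 32},
    )
    print(orchestrate([upgrade_test], Objective(max_parallel=1)))
```

In the real system, the step this `run` function stands in for would presumably provision a clone matching the production D and R, apply the specified C, and replay the captured W; the batch loop above is only a placeholder for Flex's scheduling logic.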