The data warehouse toolkit: practical techniques for building dimensional data warehouses
New TPC benchmarks for decision support and web commerce
ACM SIGMOD Record
TPC-DS, taking decision support benchmarking to the next level
Proceedings of the 2002 ACM SIGMOD international conference on Management of data
Massive Stochastic Testing of SQL
VLDB '98 Proceedings of the 24th International Conference on Very Large Data Bases
MUDD: a multi-dimensional data generator
WOSP '04 Proceedings of the 4th international workshop on Software and performance
Goals and benchmarks for autonomic configuration recommenders
Proceedings of the 2005 ACM SIGMOD international conference on Management of data
VLDB '06 Proceedings of the 32nd international conference on Very large data bases
Generating Queries with Cardinality Constraints for DBMS Testing
IEEE Transactions on Knowledge and Data Engineering
Controlled SQL query evolution for decision support benchmarks
WOSP '07 Proceedings of the 6th international workshop on Software and performance
QAGen: generating query-aware test databases
Proceedings of the 2007 ACM SIGMOD international conference on Management of data
Why you should run TPC-DS: a workload analysis
VLDB '07 Proceedings of the 33rd international conference on Very large data bases
Privacy Preserving Database Generation for Database Application Testing
Fundamenta Informaticae - Special issue ISMIS'05
Generating targeted queries for database testing
Proceedings of the 2008 ACM SIGMOD international conference on Management of data
Query-Aware Test Generation Using a Relational Constraint Solver
ASE '08 Proceedings of the 2008 23rd IEEE/ACM International Conference on Automated Software Engineering
A framework for testing DBMS features
The VLDB Journal — The International Journal on Very Large Data Bases
Using the optimizer to generate an effective regression suite: a first step
Proceedings of the Third International Workshop on Testing Database Systems
Automated SQL query generation for systematic testing of database engines
Proceedings of the IEEE/ACM international conference on Automated software engineering
What to ask to an incomplete semantic web reasoner?
IJCAI'11 Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three
Variations of the star schema benchmark to test the effects of data skew on query performance
Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering
Preventing database deadlocks in applications
Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering
REDACT: preventing database deadlocks from application-based transactions
Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering
The combination of exponential growth in the amount of data managed by a typical business intelligence system and the increased competitiveness of a global economy has propelled decision support systems (DSS) from exploratory tools employed by a few visionary companies to a core requirement for a competitive enterprise. That same maturation often demands an ever more critical system evaluation and selection process, completed in an increasingly short period of time. While there have been some advances in the generation of data sets for system evaluation (see [3]), the quantification of query performance has often relied on models and methodologies developed for systems that were simpler, less dynamic, and less central to a successful business. In this paper we present QGEN, a flexible, high-level query generator optimized for decision support system evaluation. QGEN can generate arbitrary query sets that conform to a selected statistical profile without requiring that the queries be statically defined or disclosed prior to testing. Its novel design links query syntax with abstracted data distributions, enabling users to parameterize their query workload to match an emerging access pattern or data set modification. The result is query sets that retain comparability across system comparisons while reflecting the inherent dynamism of operational systems, and that provide broad syntactic and semantic coverage while remaining focused on the commonalities of a particular evaluation process or business segment.
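The core idea the abstract describes, binding query-template placeholders from abstracted data distributions under a fixed seed so that generated workloads remain comparable across systems, can be sketched as follows. This is a minimal illustration, not QGEN's actual implementation: the template, column names, and distributions are invented for the example.

```python
import random

# Hypothetical query template; placeholders are bound from distributions
# rather than being statically defined in the query text.
TEMPLATE = (
    "SELECT s_region, SUM(l_revenue) "
    "FROM sales WHERE s_year = {year} AND s_channel = '{channel}' "
    "GROUP BY s_region"
)

# Abstracted distributions: candidate values paired with selection weights.
# A user could re-weight these to match an emerging access pattern or a
# modified data set without touching the query syntax.
DISTRIBUTIONS = {
    "year": ([2001, 2002, 2003], [1, 2, 5]),        # recent years queried more
    "channel": (["web", "store", "catalog"], [3, 2, 1]),
}

def generate_queries(template, dists, n, seed=0):
    """Produce n queries whose parameters follow the given distributions."""
    rng = random.Random(seed)  # fixed seed keeps query sets reproducible
    queries = []
    for _ in range(n):
        binding = {
            name: rng.choices(values, weights=weights, k=1)[0]
            for name, (values, weights) in dists.items()
        }
        queries.append(template.format(**binding))
    return queries

for q in generate_queries(TEMPLATE, DISTRIBUTIONS, 3):
    print(q)
```

Because the statistical profile lives in the distributions rather than in the SQL text, two vendors given the same seed and profile receive identical query sets, while a new profile yields a fresh but statistically comparable workload.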