Optimizing analytic data flows for multiple execution engines
SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
To remain competitive, enterprises are evolving to respond quickly to changing market conditions and customer needs. In this new environment, a single centralized data warehouse is no longer sufficient. Next-generation business intelligence involves data flows that span multiple, diverse processing engines, contain complex functionality such as data/text analytics and machine learning operations, and need to be optimized against various objectives. A common example is using Hadoop to analyze unstructured text and merging the results with relational database queries over the data warehouse. We refer to these multi-engine analytic data flows as hybrid flows. Currently, creating and running hybrid flows is a cumbersome task: custom scripts must be written to dispatch tasks to the individual processing engines and to exchange intermediate results. Designing correct hybrid flows is therefore challenging, and optimizing such flows is even harder. Additionally, when the underlying computing infrastructure changes, existing flows likely need modification and reoptimization. This ad hoc design approach cannot scale as hybrid flows become more commonplace. To address this challenge, we are building a platform to design and manage hybrid flows. It supports the logical design of hybrid flows in which implementation details are not exposed, generates code for the underlying processing engines, and orchestrates their execution. The key enabling technology in the platform is an optimizer that converts the logical flow into an executable form optimized for the underlying infrastructure according to user-specified objectives. In this paper, we describe the challenges in designing the optimizer and our solutions. We illustrate the optimizer through a real-world use case, presenting a logical design and optimized designs for it. We show how the performance of the use case varies with the system configuration and how the optimizer generates different optimized flows for different configurations.
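To make the optimizer's role concrete, the engine-selection problem behind hybrid flows can be sketched as a small dynamic program: each logical operator can run on several engines at different estimated costs, and shipping intermediate results between engines incurs a transfer cost, so engine choice and data movement must be weighed jointly. The operator names, cost figures, and the pipeline-only formulation below are illustrative assumptions for this sketch, not the optimizer described in the paper.

```python
# Hypothetical sketch: choosing an engine for each operator of a logical
# hybrid flow. All names and cost numbers here are made up for illustration.

# Logical flow: a linear pipeline of operators (real flows are DAGs).
FLOW = ["extract_text", "sentiment", "join_with_dw", "aggregate"]

# Estimated per-operator execution cost on each engine (arbitrary units).
COSTS = {
    "extract_text": {"hadoop": 10, "rdbms": 80},
    "sentiment":    {"hadoop": 15, "rdbms": 60},
    "join_with_dw": {"hadoop": 40, "rdbms": 12},
    "aggregate":    {"hadoop": 20, "rdbms": 5},
}

TRANSFER_COST = 8  # cost of moving an intermediate result between engines


def assign_engines(flow, costs, transfer=TRANSFER_COST):
    """Dynamic program over the pipeline: for each prefix of the flow,
    keep the cheapest plan that ends on each engine."""
    engines = list(next(iter(costs.values())))
    # best[e] = (total cost, engine sequence) for plans ending on engine e
    best = {e: (costs[flow[0]][e], [e]) for e in engines}
    for op in flow[1:]:
        new_best = {}
        for e in engines:
            new_best[e] = min(
                (best[p][0] + costs[op][e] + (0 if p == e else transfer),
                 best[p][1] + [e])
                for p in engines
            )
        best = new_best
    return min(best.values())


cost, plan = assign_engines(FLOW, COSTS)
print(cost, plan)  # 50 ['hadoop', 'hadoop', 'rdbms', 'rdbms']
```

With these numbers, the cheapest plan runs the text-analytic operators on Hadoop, pays one transfer, and finishes the join and aggregation on the relational engine, which mirrors the Hadoop-plus-data-warehouse example above. The paper's setting is harder: flows are DAGs, costs depend on data sizes, and objectives beyond execution time apply, so the real optimizer must search a far larger space.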