Creating system setups for controlled performance evaluation experiments on distributed systems is time-consuming and expensive. Re-creating experiment setups and reproducing experimental results published by other researchers is even more challenging. In this paper, we present an experiment automation approach for evaluating distributed systems in compute cloud environments. We propose three concepts that should guide the design of experiment automation tools: (1) capture experiment plans in software modules, (2) run experiments in a publicly accessible, cloud-based Elastic Lab, and (3) collaborate on experiments in an open, distributed collaboration system. We developed two tools that implement these concepts and discuss the challenges and lessons learned during their implementation. An initial example use case with Apache Cassandra on top of Amazon EC2 gives a first insight into the types of performance and scalability experiments enabled by our tools.
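To make the first concept more concrete, the sketch below shows one way an experiment plan could be captured in a software module. It is a hypothetical illustration, not the authors' actual tool: the class name CassandraScalabilityPlan, the placeholder AMI id, and the use of boto3 together with the YCSB command-line client are assumptions made only for the example.

```python
# Minimal sketch (assumptions noted above): an experiment plan for a
# Cassandra-on-EC2 scalability study encoded as a reusable Python module.
import subprocess
import boto3  # AWS SDK for Python; any cloud client API would work similarly

class CassandraScalabilityPlan:
    """Captures the parameters and steps of one experiment setup as code,
    so the setup can be re-run, varied, and shared with other researchers."""

    def __init__(self, cluster_sizes=(2, 4, 8), instance_type="m1.large",
                 ami="ami-00000000"):  # placeholder AMI id (assumption)
        self.cluster_sizes = cluster_sizes
        self.instance_type = instance_type
        self.ami = ami
        self.ec2 = boto3.resource("ec2")

    def provision(self, n):
        # Launch n Cassandra nodes and return the instance handles.
        return self.ec2.create_instances(
            ImageId=self.ami, InstanceType=self.instance_type,
            MinCount=n, MaxCount=n)

    def run_workload(self, seed_host):
        # Drive the cluster with a YCSB workload and capture its report.
        result = subprocess.run(
            ["bin/ycsb", "run", "cassandra-cql",
             "-P", "workloads/workloada", "-p", f"hosts={seed_host}"],
            capture_output=True, text=True)
        return result.stdout

    def execute(self):
        # Sweep over cluster sizes; each iteration is one measured setup.
        for n in self.cluster_sizes:
            nodes = self.provision(n)
            # ... wait until the Cassandra ring has bootstrapped ...
            report = self.run_workload(nodes[0].private_ip_address)
            print(f"cluster_size={n}\n{report}")
```

Encoding the plan this way lets other researchers reproduce or vary the experiment by changing parameters in one place rather than repeating manual setup steps.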