Towards realistic benchmarks for virtual infrastructure resource allocators
APSys'12: Proceedings of the Third ACM SIGOPS Asia-Pacific Workshop on Systems
Designing cloud computing setups is a challenging task: it requires understanding the impact of a plethora of parameters, ranging from cluster configuration, partitioning, and networking characteristics to the behavior of the targeted applications. The size of this design space, and the scale of the clusters involved, make testing different cluster configurations on real setups cumbersome and error-prone. Thus, the community increasingly relies on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on how accurate and realistic the employed workload traces are. Unfortunately, few cloud workload traces are publicly available. In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., by Google, and inferring lessons that can be used both to design realistic cloud workloads and to enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces in two case studies: (i) evaluating Hadoop job schedulers, and (ii) quantifying the impact of shared storage on Hadoop system performance.
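The trace analysis described above can be illustrated with a minimal sketch: extracting per-job statistics (inter-arrival times, run durations) from parsed trace records, which is a typical first step toward synthesizing realistic workloads. The record layout and field meanings here are illustrative assumptions, not the actual Google trace schema.

```python
# Sketch of deriving workload characteristics from a parsed cloud trace.
# NOTE: the (submit_time, duration) tuple layout is a hypothetical,
# simplified stand-in for real trace fields.
from statistics import mean

def interarrival_times(submit_times):
    """Gaps between consecutive job submissions, in trace time units."""
    ts = sorted(submit_times)
    return [later - earlier for earlier, later in zip(ts, ts[1:])]

def summarize(jobs):
    """jobs: list of (submit_time, duration) tuples from a parsed trace.

    Returns aggregate statistics one might feed into a workload generator
    or a Hadoop simulator.
    """
    gaps = interarrival_times([s for s, _ in jobs])
    return {
        "jobs": len(jobs),
        "mean_interarrival": mean(gaps) if gaps else 0.0,
        "mean_duration": mean(d for _, d in jobs),
    }

# Tiny synthetic sample standing in for real trace rows.
sample = [(0, 30), (10, 5), (25, 60), (100, 12)]
print(summarize(sample))
```

In practice, one would fit distributions (rather than report means) to these quantities and sample from them to generate synthetic jobs at realistic scale.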