Collecting detailed disk I/O characteristics of workloads is the first step in tuning disk subsystem performance. This paper presents an efficient implementation of disk I/O workload characterization using online histograms in a virtual machine hypervisor, VMware ESX Server. The technique allows transparent, online collection of essential workload characteristics for arbitrary, unmodified operating system instances running in virtual machines. For analyses that cannot be performed efficiently online, we provide a virtual SCSI command tracing framework. Our online histograms cover essential disk I/O performance metrics, including I/O block size, latency, spatial locality, I/O interarrival period, and active queue depth. We demonstrate the technique on Filebench, DBT-2, and large-file-copy workloads running in virtual machines, and analyze the differences between the ZFS and UFS filesystems on Solaris. We show that our implementation introduces negligible CPU, memory, and latency overheads while still capturing essential workload characteristics.
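The core idea behind the low overhead claim is that an online histogram with fixed bucket boundaries needs only a small, constant-size array of counters per metric, updated in the I/O issue/completion path. The sketch below is an illustrative reconstruction under assumed names (`OnlineHistogram`, the latency bucket edges), not the actual ESX Server implementation:

```python
import bisect

class OnlineHistogram:
    """Fixed-bucket online histogram: binning a sample costs
    O(log B) time, and memory is just B+1 integer counters."""

    def __init__(self, boundaries):
        # Sorted inclusive upper edges of the buckets; an extra
        # final bucket catches values above the largest edge.
        self.boundaries = sorted(boundaries)
        self.counts = [0] * (len(self.boundaries) + 1)
        self.total = 0

    def add(self, value):
        # First bucket whose upper edge is >= value.
        idx = bisect.bisect_left(self.boundaries, value)
        self.counts[idx] += 1
        self.total += 1

    def distribution(self):
        # Normalized bucket frequencies (empty histogram -> zeros).
        if self.total == 0:
            return list(self.counts)
        return [c / self.total for c in self.counts]

# Example: bin per-I/O latencies (microseconds) into a handful of
# buckets; these edges are hypothetical, chosen for illustration.
lat_hist = OnlineHistogram([100, 500, 1000, 5000, 15000, 30000, 100000])
for lat_us in [80, 450, 450, 900, 12000, 250000]:
    lat_hist.add(lat_us)
```

The same structure works for the other metrics the paper tracks (block size, seek distance for spatial locality, interarrival period, queue depth); only the bucket edges differ per metric.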