Many data centers make extensive use of virtual machines (VMs), which provide the flexibility to move workloads among physical servers. VMs can be placed to maximize application performance, power efficiency, or even fault tolerance. However, VMs are typically repositioned without considering network topology, congestion, or traffic routes. In this demo, we present Virtue, a system that enables the comparison of different algorithms for VM placement and network routing at the scale of an entire data center. Our goal is to understand how placement and routing affect overall application performance as the types and mix of workloads, network topologies, and compute resources vary; demo attendees will be able to explore these parameters.
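To illustrate why topology-aware placement matters, here is a minimal sketch (not Virtue's actual algorithm; the topology, traffic matrix, and cost function below are all illustrative assumptions) comparing a topology-oblivious VM layout against one that minimizes traffic-weighted hop counts on a toy two-rack fabric:

```python
# Hypothetical sketch: comparing two VM-placement strategies on a toy
# two-rack topology, in the spirit of the placement-vs-routing trade-offs
# a framework like Virtue lets one explore. All names and numbers here
# are illustrative, not taken from the Virtue system itself.

from itertools import permutations

# Hop distance between four server slots: slots 0-1 share one rack,
# slots 2-3 share another; cross-rack traffic costs 3 hops.
HOPS = [
    [0, 1, 3, 3],
    [1, 0, 3, 3],
    [3, 3, 0, 1],
    [3, 3, 1, 0],
]

# Pairwise traffic (arbitrary units) among four VMs: VMs 0-1 and VMs 2-3
# communicate heavily; cross-pair traffic is light.
TRAFFIC = {(0, 1): 10, (2, 3): 10, (0, 2): 1, (1, 3): 1}

def cost(placement):
    """Total traffic-weighted hop count for a VM -> slot assignment,
    where placement[i] is the slot hosting VM i."""
    return sum(t * HOPS[placement[a]][placement[b]]
               for (a, b), t in TRAFFIC.items())

def best_placement():
    """Exhaustive search (fine at this toy scale) for the cheapest layout."""
    return min(permutations(range(4)), key=cost)

# A topology-oblivious layout might split each chatty VM pair across racks:
naive = (0, 2, 1, 3)   # VM i -> slot naive[i]
good = best_placement()

print(cost(naive), cost(good))  # the naive layout pays far more cross-rack cost
```

Even this toy example shows the effect the demo explores at data-center scale: the oblivious layout sends the heavy flows across racks, while the topology-aware one keeps them rack-local, and real systems must additionally account for congestion and route selection rather than static hop counts.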