To minimize user-perceived latencies, web services are often deployed across multiple geographically distributed data centers. The premise of our work is that web services deployed across multiple cloud infrastructure services can serve users from more data centers than is possible with any single cloud service, and hence offer lower latencies to users. In this paper, we conduct a comprehensive measurement study to understand the potential latency benefits of deploying web services across three popular cloud infrastructure services: Amazon EC2, Google Compute Engine (GCE), and Microsoft Azure. We estimate that, compared to a deployment on any one of these cloud services, users in up to half of all IP address prefixes can have their RTTs reduced by over 20% when a web service is deployed across all three. When we dig deeper into these latency benefits, we make three significant observations. First, when web services shift from single-cloud to multi-cloud deployments, a significant fraction of prefixes will see latency benefits simply by being served from a different data center in the same location; routing inefficiencies that exist between a prefix and a nearby data center in one cloud service are absent on the path from that prefix to a nearby data center in a different cloud service. Second, despite the latency improvements that a large fraction of prefixes will perceive, users in several locations (e.g., Argentina and Israel) will continue to incur RTTs greater than 100 ms even when web services span three large-scale cloud services (EC2, GCE, and Azure). Finally, harnessing the latency benefits offered by multi-cloud deployments is likely to be challenging in practice: our measurements show that the data center offering the lowest latency to a prefix often fluctuates between different cloud services, thus necessitating replication of data.
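The headline estimate above can be made concrete with a small sketch. The code below is not the paper's methodology; it is a minimal, hypothetical illustration of the comparison being described: given per-prefix RTT measurements to the nearest data center of each cloud (synthetic numbers here), compute the fraction of prefixes whose best multi-cloud RTT beats a chosen single-cloud baseline by more than 20%. The function name, data layout, and cloud labels are all assumptions for illustration.

```python
def fraction_improved(rtts_by_prefix, baseline_cloud, threshold=0.20):
    """Fraction of prefixes whose best RTT across all clouds is more than
    `threshold` (fractionally) lower than their RTT via `baseline_cloud`.

    rtts_by_prefix: {prefix: {cloud_name: rtt_ms}} -- hypothetical input,
    e.g. the minimum RTT from the prefix to each cloud's data centers.
    """
    improved = total = 0
    for rtts in rtts_by_prefix.values():
        if baseline_cloud not in rtts:
            continue  # prefix has no measurement for the baseline cloud
        total += 1
        # Multi-cloud deployment serves each prefix from whichever cloud
        # offers the lowest RTT.
        if min(rtts.values()) < (1 - threshold) * rtts[baseline_cloud]:
            improved += 1
    return improved / total if total else 0.0


# Synthetic example: two prefixes, RTTs in ms (made-up values).
rtts = {
    "p1": {"ec2": 100, "gce": 70, "azure": 90},  # GCE cuts RTT by 30% vs EC2
    "p2": {"ec2": 50, "gce": 55, "azure": 60},   # EC2 is already the best
}
print(fraction_improved(rtts, "ec2"))  # 0.5: one of two prefixes improves
```

In this toy input, half the prefixes improve by more than 20% over an EC2-only deployment, mirroring the form (not the data) of the paper's "up to half the IP address prefixes" estimate; the paper's "up to" suggests taking the best such fraction over the three possible single-cloud baselines.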