In this paper, we examine the benefits of split-TCP proxies, deployed in an operational worldwide network, for accelerating cloud services. We consider a network consisting of a large number of satellite datacenters, which host split-TCP proxies, and a smaller number of mega datacenters, which ultimately perform the computation or provide the storage. Using web search as an exemplary case study, our detailed measurements reveal that a vanilla TCP splitting solution deployed at the satellite DCs reduces the 95th percentile of latency by as much as 43% compared to serving queries directly from the mega DCs. Through careful dissection of the measurement results, we characterize how individual components, including proxy stacks, network protocols, packet losses, and network load, impact latency. Finally, we shed light on further optimizations that can fully realize the potential of the TCP splitting solution.
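The core mechanism can be illustrated with a minimal sketch: a satellite-DC proxy terminates the client's TCP connection over a short RTT (so the handshake and slow-start are fast for the client) and relays the request over a second connection to the mega DC. The sketch below is illustrative only, not the paper's implementation; the backend stand-in, port choices, and single-request relay loop are assumptions for the example.

```python
# Hypothetical sketch of the split-TCP idea (not the paper's implementation).
# The "satellite" proxy terminates the client's TCP connection nearby and
# relays bytes over a separate TCP leg to the "mega" datacenter.
import socket
import threading

def run_backend(host="127.0.0.1", port=0):
    """Stand-in for the mega DC: answers one request, uppercased."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)
        conn.sendall(data.upper())
        conn.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

def run_proxy(backend_addr, host="127.0.0.1", port=0):
    """Satellite-DC proxy: splits the end-to-end path into two TCP legs."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    def splice():
        client, _ = srv.accept()                           # client-side leg ends here
        upstream = socket.create_connection(backend_addr)  # proxy-to-mega-DC leg
        upstream.sendall(client.recv(1024))                # relay the request
        client.sendall(upstream.recv(1024))                # relay the response
        client.close()
        upstream.close()
    threading.Thread(target=splice, daemon=True).start()
    return srv.getsockname()

backend = run_backend()
proxy = run_proxy(backend)
cli = socket.create_connection(proxy)   # client handshakes with the nearby proxy
cli.sendall(b"web search query")
reply = cli.recv(1024)
cli.close()
```

In a real deployment the proxy-to-mega-DC leg would be a persistent, pre-warmed connection with a large congestion window, which is where the latency savings over a direct long-haul connection come from.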