Today's data centers must support a range of workloads with different demands. While existing approaches handle routine traffic smoothly, ephemeral but intense hotspots cause excessive packet loss and severely degrade performance. This loss occurs even though the congestion is typically highly localized, with spare buffer capacity available at nearby switches. We argue that switches should share buffer capacity to effectively handle this spot congestion, without the latency or cost penalty of deploying large buffers at individual switches. We present detour-induced buffer sharing (DIBS), a mechanism that achieves a near-lossless network without requiring additional buffers. Using DIBS, a congested switch detours packets randomly to neighboring switches instead of dropping them. We implement DIBS in hardware, on software routers in a testbed, and in simulation, and we demonstrate that it reduces the 99th percentile of query completion time by 85%, with very little impact on background traffic.
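The core forwarding decision described above can be sketched in a few lines. The following is a minimal, illustrative model, not the paper's implementation: each output port has a bounded queue, and when the intended port's buffer is full, the packet is detoured to a randomly chosen alternative port rather than dropped. The class and method names (`DibsSwitch`, `forward`) and the simple list-based queue model are assumptions made for clarity.

```python
import random

class DibsSwitch:
    """Toy model of detour-induced buffer sharing (DIBS).

    Each output port holds a bounded FIFO. When the desired port's
    buffer is full, the packet is sent out a random other port
    (a detour) instead of being dropped.
    """

    def __init__(self, num_ports, buffer_size):
        self.buffers = [[] for _ in range(num_ports)]
        self.buffer_size = buffer_size

    def forward(self, packet, out_port, in_port):
        """Enqueue packet on out_port if possible; else detour.

        Returns the port the packet was actually queued on.
        """
        # Normal case: there is room on the intended output port.
        if len(self.buffers[out_port]) < self.buffer_size:
            self.buffers[out_port].append(packet)
            return out_port
        # Congested: detour to a random other port, preferring not to
        # bounce the packet back out the port it arrived on.
        candidates = [p for p in range(len(self.buffers))
                      if p != out_port and p != in_port]
        if not candidates:
            candidates = [in_port]  # last resort: send it back
        detour = random.choice(candidates)
        self.buffers[detour].append(packet)
        return detour
```

In this sketch the detoured packet borrows buffer space at a neighbor, which later forwards it back toward the destination; random port selection spreads the overflow across nearby switches, matching the intuition that hotspot congestion is localized while neighboring buffers sit idle.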