Solving the TCP-Incast Problem with Application-Level Scheduling

  • Authors:
  • Maxim Podlesny; Carey Williamson

  • Venue:
  • MASCOTS '12 Proceedings of the 2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems
  • Year:
  • 2012

Abstract

Data center networks are characterized by high link speeds, low propagation delays, small switch buffers, and temporally clustered arrivals of many concurrent TCP flows fulfilling data transfer requests. The combination of these features can lead to transient buffer overflow and bursty packet losses, which in turn trigger TCP retransmission timeouts that degrade the performance of short-lived flows. This so-called TCP-incast problem can cause TCP throughput collapse. In this paper, we explore an application-level approach for solving this problem. The key idea of our solution is to coordinate the scheduling of short-lived TCP flows so that no packet loss occurs. We develop a mathematical model of lossless data transmission, and estimate the maximum goodput achievable in data center networks. The results indicate non-monotonic goodput that is highly sensitive to specific parameter configurations in the data center network. We validate our model using ns-2 network simulations, which show good correspondence with the theoretical results.
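
The sketch below is not the paper's scheme, only a minimal illustration of the application-level idea: batching concurrent data-transfer requests so that each batch's worst-case burst fits within the bottleneck switch buffer. All function names and parameter values (buffer size, window, MSS) are hypothetical, chosen for illustration only.

```python
import math

def max_concurrent_flows(buffer_bytes, window_segments, mss_bytes):
    """Upper bound on senders that can burst simultaneously without
    overflowing a shared switch buffer (illustrative assumption)."""
    per_flow_burst = window_segments * mss_bytes
    return max(1, math.floor(buffer_bytes / per_flow_burst))

def schedule_requests(servers, buffer_bytes, window_segments=2, mss_bytes=1460):
    """Group responding servers into batches whose combined initial
    burst does not exceed the bottleneck buffer."""
    batch_size = max_concurrent_flows(buffer_bytes, window_segments, mss_bytes)
    return [servers[i:i + batch_size] for i in range(0, len(servers), batch_size)]

if __name__ == "__main__":
    # Example: 64 KB shared buffer, 40 responding servers (hypothetical values)
    batches = schedule_requests(list(range(40)), buffer_bytes=64 * 1024)
    for i, batch in enumerate(batches):
        print(f"batch {i}: servers {batch}")
```

Under these assumptions, each batch of senders transmits in turn rather than all at once, which is the intuition behind avoiding the synchronized burst that causes incast-induced timeouts.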