Revisiting TCP congestion control using delay gradients

  • Authors:
  • David A. Hayes and Grenville Armitage

  • Affiliations:
  • Centre for Advanced Internet Architectures, Swinburne University of Technology, Melbourne, Australia (both authors)

  • Venue:
  • NETWORKING'11: Proceedings of the 10th International IFIP TC 6 Conference on Networking, Volume Part II
  • Year:
  • 2011


Abstract

Traditional loss-based TCP congestion control (CC) tends to induce high queuing delays and performs badly across paths containing links that exhibit packet losses unrelated to congestion. Delay-based TCP CC algorithms infer congestion from delay measurements and tend to keep queue lengths low. To date, most delay-based CC algorithms do not coexist well with loss-based TCP, and they require knowledge of a network path's RTT characteristics to establish delay thresholds indicative of congestion. We propose and implement a delay-gradient CC algorithm (CDG) that no longer requires knowledge of path-specific minimum RTT or delay thresholds. Our FreeBSD implementation is shown to coexist reasonably with loss-based TCP (NewReno) in lightly multiplexed environments, share capacity fairly between instances of itself and NewReno, and exhibit improved tolerance of non-congestion related losses (86% better goodput than NewReno in the presence of 1% packet losses).
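To make the delay-gradient idea concrete, the C sketch below shows one way a sender might turn per-interval RTT measurements into a probabilistic backoff signal, reacting to the *change* in delay rather than to an absolute delay threshold. This is only an illustrative sketch of the concept described in the abstract, not the authors' FreeBSD implementation; the smoothing window `WINDOW` and scaling constant `G_SCALE` are assumed values, and real CDG also tracks the gradient of the maximum RTT and handles loss-based coexistence.

```c
/*
 * Illustrative sketch of a delay-gradient congestion signal (not the
 * paper's FreeBSD module). Congestion is inferred from changes in the
 * per-interval minimum RTT; WINDOW and G_SCALE are assumed constants.
 * Compile with: cc dg.c -lm
 */
#include <math.h>
#include <stdio.h>

#define WINDOW  8       /* moving-average window for smoothing (assumed) */
#define G_SCALE 3.0     /* backoff scaling constant in ms (assumed) */

struct dg_state {
    double prev_rtt_min;        /* minimum RTT seen in the previous interval */
    double grad_hist[WINDOW];   /* recent per-interval gradients */
    int    idx;
    int    count;
};

/*
 * Feed the minimum RTT (ms) observed over one RTT-long measurement
 * interval; returns a probability in [0,1] of backing off cwnd this
 * interval, growing with a sustained positive delay gradient.
 */
static double dg_update(struct dg_state *s, double rtt_min)
{
    double grad = 0.0, sum = 0.0, avg;
    int i;

    if (s->count > 0)
        grad = rtt_min - s->prev_rtt_min;   /* per-interval delay gradient */
    s->prev_rtt_min = rtt_min;

    s->grad_hist[s->idx] = grad;
    s->idx = (s->idx + 1) % WINDOW;
    if (s->count < WINDOW)
        s->count++;

    for (i = 0; i < s->count; i++)
        sum += s->grad_hist[i];
    avg = sum / s->count;                   /* smoothed gradient */

    if (avg <= 0.0)
        return 0.0;                         /* queue stable or draining */
    return 1.0 - exp(-avg / G_SCALE);       /* probabilistic backoff */
}

int main(void)
{
    /* Toy trace: RTT minima per interval (ms), rising as a queue builds. */
    double trace[] = { 40, 40, 41, 43, 46, 50, 55, 55, 54 };
    struct dg_state s = { 0 };
    size_t i;

    for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        printf("interval %zu: rtt_min=%.0f ms  backoff_prob=%.2f\n",
               i, trace[i], dg_update(&s, trace[i]));
    return 0;
}
```

Because the signal depends only on RTT differences between consecutive intervals, no path-specific base RTT or delay threshold needs to be configured, which is the property the abstract highlights over earlier delay-based schemes.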