Buffer overflows of merging streams

  • Authors:
  • Alexander Kesselman; Yishay Mansour; Zvi Lotker; Boaz Patt-Shamir

  • Affiliations:
  • Tel Aviv University, Tel Aviv, Israel; Tel Aviv University, Tel Aviv, Israel; Tel Aviv University, Tel Aviv, Israel; Hewlett Packard, Cambridge, MA

  • Venue:
  • Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures
  • Year:
  • 2003

Abstract

Consider an Internet service provider (ISP), or a corporate intranet, that connects a large number of users with the Internet backbone using an "uplink." Within such a system, consider the traffic oriented towards the uplink, namely the streams whose start points are the local users and whose destination is outside the local domain. These streams are merged by a network that consists of merge nodes, typically arranged in a tree topology whose root is directly connected to the uplink. Without loss of generality, we may assume that the bandwidth of the link emanating from a merge node is less than the sum of the bandwidths of its incoming links (otherwise, we can assume that the incoming links are connected directly to the next node up). Hence, when all users inject data at maximum local speed, packets will eventually be discarded. A very effective way to mitigate some of the losses due to temporary overloads is to equip the merge nodes with buffers that can absorb transient bursts by storing incoming packets while the outgoing link is busy. The merge nodes are controlled by local on-line buffer management algorithms whose job is to decide which packets to forward and which to drop, so as to minimize the damage in case of an overflow.
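
To make the system model concrete, the sketch below simulates a small merge tree in Python: two leaf nodes feed a root node connected to the uplink, each node has a bounded FIFO buffer, and the outgoing link carries fewer packets per step than the incoming links deliver. The names (MergeNode, buffer_size, out_rate) and the tail-drop rule are illustrative assumptions only; they are not the authors' buffer management algorithm, which the paper analyzes in a competitive on-line framework.

```python
from collections import deque


class MergeNode:
    """Merge node with a bounded FIFO buffer and a simple tail-drop rule.

    Illustrates the model from the abstract only; the drop policy here is
    NOT the paper's algorithm, just a placeholder overflow rule.
    """

    def __init__(self, buffer_size, out_rate=1):
        self.buffer = deque()
        self.buffer_size = buffer_size   # packets the buffer can hold
        self.out_rate = out_rate         # packets forwarded per time step
        self.dropped = 0
        self.forwarded = 0

    def receive(self, packets):
        """Absorb a burst arriving on the incoming links in one time step."""
        for pkt in packets:
            if len(self.buffer) < self.buffer_size:
                self.buffer.append(pkt)
            else:
                self.dropped += 1        # overflow: drop the excess packet

    def send(self):
        """Forward up to out_rate packets on the outgoing link."""
        sent = []
        for _ in range(min(self.out_rate, len(self.buffer))):
            sent.append(self.buffer.popleft())
        self.forwarded += len(sent)
        return sent


if __name__ == "__main__":
    # Two leaves merge into a root whose uplink has the same unit rate,
    # so sustained full-speed injection must eventually cause drops.
    leaf_a, leaf_b = MergeNode(buffer_size=4), MergeNode(buffer_size=4)
    root = MergeNode(buffer_size=4)

    for t in range(10):
        # Users inject 2 packets per leaf per step (maximum local speed).
        leaf_a.receive([f"a{t}.{i}" for i in range(2)])
        leaf_b.receive([f"b{t}.{i}" for i in range(2)])
        # The root merges the leaves' outgoing streams onto a unit-rate uplink.
        root.receive(leaf_a.send() + leaf_b.send())
        root.send()

    print("root forwarded:", root.forwarded, "dropped:", root.dropped)
```

Running the sketch shows the buffers absorbing the initial burst and then overflowing once they fill, which is exactly the regime in which the choice of on-line drop policy matters.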