Reducing the impact of false time out on TCP performance in TCP over OBS networks

  • Authors:
  • N. Sreenath; N. Srinath; J. Aloysius Suren; K. D. Kumar

  • Affiliations:
  • Department of Computer Science and Information Technology, Pondicherry Engineering College, Pondicherry, India; Symantec Software and Services India Pvt. Ltd, Chennai, India; Amazon Development Center India Pvt. Ltd., Chennai, India; Sameva Software and Services Pvt. Ltd., Hyderabad, India

  • Venue:
  • Photonic Network Communications
  • Year:
  • 2014


Abstract

Random burst contention losses plague the performance of Optical Burst Switched (OBS) networks. Such random losses occur even under low network load due to the analogous behavior of the wavelength assignment and routing algorithms. Since a burst may carry many packets from many TCP sources, its loss can mislead those TCP sources into inferring that the underlying (optical) network is congested. Accordingly, TCP reduces its sending rate and switches to either fast retransmission or the slow start state. This reaction by TCP is unwarranted in TCP over OBS networks, as the optical network may not be congested during such random burst contention losses. Hence, these losses must be addressed to improve the performance of TCP over OBS networks. Existing work in the literature achieves this objective at the cost of violating the semantics of OBS and/or TCP, and several other works make delay-inducing assumptions. In our work, we introduce a new layer, called the Adaptation Layer, between the TCP and OBS layers. This layer uses burst retransmission to mitigate the effect of contention-induced burst loss on TCP by exploiting the difference between the round-trip times of TCP and OBS. We achieve our objective with the added advantage of keeping the semantics of both layers intact.
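
The abstract only names the mechanism; the minimal Python sketch below illustrates the underlying idea under stated assumptions. The names (AdaptationLayer, Burst, obs_rtt, tcp_rto, on_contention_drop) and the timing model are hypothetical and not taken from the paper: the point is that, because the optical round-trip time is far smaller than the TCP retransmission timeout, a burst lost to contention can be retransmitted inside the optical domain before any TCP source's timer expires.

```python
"""Illustrative sketch only: adaptation-layer burst retransmission.

All names and parameters here are assumptions for illustration, not the
authors' implementation.
"""

from dataclasses import dataclass, field
import time


@dataclass
class Burst:
    burst_id: int
    payload_packets: list                     # TCP segments assembled into this burst
    sent_at: float = field(default_factory=time.monotonic)
    retransmissions: int = 0


class AdaptationLayer:
    """Sits between the TCP and OBS layers; hides contention losses from TCP."""

    def __init__(self, obs_rtt: float, tcp_rto: float, max_retx: int = 3):
        self.obs_rtt = obs_rtt        # optical-domain round trip, e.g. a few milliseconds
        self.tcp_rto = tcp_rto        # TCP retransmission timeout, e.g. hundreds of milliseconds
        self.max_retx = max_retx
        self.pending = {}             # burst_id -> Burst awaiting delivery confirmation

    def send_burst(self, burst: Burst) -> None:
        burst.sent_at = time.monotonic()
        self.pending[burst.burst_id] = burst
        # ... hand the burst to the OBS control plane here ...

    def on_contention_drop(self, burst_id: int) -> bool:
        """Handle a contention loss reported by the OBS layer.

        Returns True if the burst was retransmitted locally (so the TCP
        sources never observe the loss), False if recovery is left to TCP.
        """
        burst = self.pending.get(burst_id)
        if burst is None:
            return False

        elapsed = time.monotonic() - burst.sent_at
        # Retransmit only if another optical round trip still fits inside
        # the remaining TCP retransmission-timeout budget.
        if (burst.retransmissions < self.max_retx
                and elapsed + self.obs_rtt < self.tcp_rto):
            burst.retransmissions += 1
            self.send_burst(burst)
            return True

        # Otherwise give up and let the TCP sources recover the segments.
        del self.pending[burst_id]
        return False


# Example: with an assumed 5 ms optical RTT and a 500 ms TCP RTO, a lost
# burst can be repaired several times over before any TCP timer fires.
layer = AdaptationLayer(obs_rtt=0.005, tcp_rto=0.5)
layer.send_burst(Burst(burst_id=1, payload_packets=["seg1", "seg2"]))
print(layer.on_contention_drop(1))   # True: handled below the TCP layer
```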