We investigate the use of the Transmission Control Protocol (TCP) for the transfer of real-time constant bit-rate data and report on the effects of the protocol on the flow of data. TCP is a connection-oriented, reliable, end-to-end transport protocol that provides bitwise-correct data transfer. TCP transmits data according to well-known rules that guarantee reliable delivery and attempt to ensure that the available capacity is shared equitably amongst users. However, this behaviour can result in delayed transmission and highly variable throughput. We discuss the system requirements and parameters needed for TCP to move constant bit-rate real-time data successfully and, were TCP to be used, the implications for applications such as e-VLBI, where timely arrival of the data matters more than guaranteed delivery. Experiments were conducted at bit-rates of hundreds of Mbit/s over dedicated European Gigabit lightpaths. The results show that for a lossy TCP connection using standard bandwidth-delay sized buffers, packet arrival times for a constant bit-rate flow diverge from real-time arrival. Adding sender-side buffering by increasing the Linux socket buffer sizes, by orders of magnitude in some cases, allows timely arrival of data at the TCP level, with only temporary, though possibly lengthy, delays.
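The sender-side buffering described above is applied, on Linux, by requesting a larger send buffer on the TCP socket. The sketch below shows the mechanism in Python; the 64 MiB figure is an illustrative assumption, not a value taken from the experiments, and the effective size is still capped by the kernel's `net.core.wmem_max` setting.

```python
import socket

# Illustrative assumption: a buffer far larger than a typical
# bandwidth-delay product, standing in for the "orders of magnitude"
# increase the abstract describes.
REQUESTED_SNDBUF = 64 * 1024 * 1024  # 64 MiB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUESTED_SNDBUF)

# Query what the kernel actually granted. On Linux the returned value
# is roughly double the request (bookkeeping overhead is included),
# but it is clamped to net.core.wmem_max, so it may be much smaller.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)
sock.close()
```

With a buffer this large, a loss-induced stall in TCP's congestion window lets the sending application continue writing into the socket buffer rather than blocking, so data keeps arriving in a timely fashion once the connection recovers, at the cost of a temporary delay.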