The performance analysis of Linux networking - Packet receiving

  • Authors:
  • Wenji Wu; Matt Crawford; Mark Bowden

  • Affiliations:
  • Fermilab, MS-368, P.O. Box 500, Batavia, IL 60510, USA (all authors)

  • Venue:
  • Computer Communications
  • Year:
  • 2007

Abstract

The computing models for high-energy physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and near the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and the end systems (computing and storage) face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets - now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade - reliably and efficiently among facilities and computation centers scattered around the world. Both the network and the end systems must be able to sustain high-bandwidth, end-to-end data transmission. Recent technology trends show that although raw network transmission speeds are increasing rapidly, the rate of advancement of microprocessor technology has slowed. As a result, network protocol-processing overheads have risen sharply relative to the time spent in packet transmission, degrading throughput for networked applications. Increasingly, it is the network end system, rather than the network itself, that is responsible for the degraded performance of network applications. In this paper, the Linux system's packet receiving process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process, and we analyze the key factors that affect Linux systems' network performance.
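
To make the transmission-versus-processing imbalance concrete, the sketch below is a back-of-envelope calculation, not the paper's model: it compares the wire time of a full-size Ethernet frame on an assumed 10 Gb/s link against an assumed per-packet protocol-processing cost, showing how the end system's per-packet budget, rather than the link, can become the bottleneck. The link rate, frame size, and 2 us processing cost are illustrative assumptions only.

```c
/* Back-of-envelope illustration (not the authors' model): at high line
 * rates the per-packet time budget shrinks below a typical protocol-
 * processing cost, so the receiving end system caps throughput.
 * The 10 Gb/s link rate and 2 us per-packet CPU cost are assumed. */
#include <stdio.h>

int main(void)
{
    const double link_bps      = 10e9;  /* assumed link rate: 10 Gb/s        */
    const double frame_bytes   = 1538;  /* 1500 B MTU + Ethernet framing     */
    const double cpu_per_pkt_s = 2e-6;  /* assumed per-packet processing cost */

    double wire_per_pkt_s = frame_bytes * 8.0 / link_bps;  /* ~1.23 us per frame  */
    double arrival_pps    = 1.0 / wire_per_pkt_s;          /* frames/s on the wire */
    double cpu_limit_pps  = 1.0 / cpu_per_pkt_s;           /* frames/s one core can process */

    printf("wire time per packet : %.2f us\n", wire_per_pkt_s * 1e6);
    printf("arrival rate         : %.0f pps\n", arrival_pps);
    printf("CPU-limited rate     : %.0f pps (%.1f Gb/s effective)\n",
           cpu_limit_pps, cpu_limit_pps * frame_bytes * 8.0 / 1e9);
    return 0;
}
```

With these assumed numbers, frames arrive roughly every 1.23 us while the receiver needs about 2 us per frame, so the achievable throughput falls well below the line rate, which is the kind of end-system bottleneck the paper examines.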