Pipelined Execution of Critical Sections Using Software-Controlled Caching in Network Processors

  • Authors:
  • Jinquan Dai; Long Li; Bo Huang

  • Affiliations:
  • Intel China Software Center; Intel China Software Center; Intel China Software Center

  • Venue:
  • Proceedings of the International Symposium on Code Generation and Optimization
  • Year:
  • 2007

Abstract

To keep up with the explosive growth of Internet packet-processing demands, modern network processors (NPs) employ a highly parallel, multi-threaded and multi-core architecture. In such a parallel paradigm, accesses to shared variables in external memory (and the associated memory latency) are contained in critical sections, so that they can be executed atomically and sequentially by the different threads in the network processor. In this paper, we present a novel program transformation, used in the Intel Auto-partitioning C Compiler for IXP, that exploits the inherent finer-grained parallelism of those critical sections using the software-controlled caching mechanism available in the NPs. Consequently, those critical sections can be executed in a pipelined fashion by different threads, thereby effectively hiding the memory latency and improving the performance of network applications. Experimental results show that the proposed transformation provides impressive speedup (up to 9.94x) and scalability (up to 80 threads) for a real-world network application (a 10 Gbps Ethernet Core/Metro Router).
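To illustrate the idea behind the transformation described in the abstract, the following is a minimal, hypothetical C sketch (not the compiler's actual output or the IXP microengine code): instead of holding a lock across an entire external-memory read-modify-write, each thread consults a small software-controlled cache, so the long memory latency is taken out of the serialized region and only a short, latency-free cache update remains atomic. All names (flow counters, slot count, the simulated `external_read` latency) are assumptions made for the example; pthreads stands in for the NP's hardware threads.

```c
/* Hypothetical sketch of caching shared state so critical sections shrink
 * to a short cache update; compile with: cc sketch.c -lpthread            */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define CACHE_SLOTS 8
#define NUM_THREADS 4
#define UPDATES_PER_THREAD 1000

static long external_mem[64];              /* simulated external memory    */
static int  cache_tag[CACHE_SLOTS];        /* software-controlled cache:   */
static long cache_val[CACHE_SLOTS];        /*   tag + cached value         */
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Simulated long-latency external-memory read. */
static long external_read(int addr) {
    usleep(100);                           /* stand-in for DRAM latency    */
    return external_mem[addr];
}

/* Critical-section body: increment the counter of one flow. */
static void update_counter(int flow) {
    int slot = flow % CACHE_SLOTS;

    pthread_mutex_lock(&cache_lock);
    if (cache_tag[slot] != flow) {
        /* Miss: write back the evicted entry and fetch the new one.
         * On a real NP this fetch would be overlapped with other threads'
         * work rather than performed under the lock as done here.        */
        if (cache_tag[slot] >= 0)
            external_mem[cache_tag[slot]] = cache_val[slot];
        cache_val[slot] = external_read(flow);
        cache_tag[slot] = flow;
    }
    cache_val[slot]++;                     /* hit path: short atomic update */
    pthread_mutex_unlock(&cache_lock);
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    for (int i = 0; i < UPDATES_PER_THREAD; i++)
        update_counter((id + i) % 16);     /* a few hot flows -> mostly hits */
    return NULL;
}

int main(void) {
    pthread_t t[NUM_THREADS];
    for (int s = 0; s < CACHE_SLOTS; s++) cache_tag[s] = -1;
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    /* Flush cached entries back to external memory before reporting. */
    for (int s = 0; s < CACHE_SLOTS; s++)
        if (cache_tag[s] >= 0) external_mem[cache_tag[s]] = cache_val[s];
    long total = 0;
    for (int a = 0; a < 64; a++) total += external_mem[a];
    printf("total updates = %ld\n", total); /* expect NUM_THREADS * 1000    */
    return 0;
}
```

In this sketch the lock is held only for a cache lookup and an increment; the paper's transformation goes further by pipelining the resulting stages across threads on the IXP, which the pthread model above does not attempt to reproduce.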