Parallelism versus memory allocation in pipelined router forwarding engines

  • Authors:
  • Fan Chung; Ronald Graham; George Varghese

  • Affiliations:
  • University of California, San Diego (all authors)

  • Venue:
  • Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures
  • Year:
  • 2004


Abstract

A crucial problem that must be solved is the allocation of memory to the processors in a pipeline. Ideally, each processor would have its own separate (one-port) memory in order to minimize contention; however, this also minimizes memory sharing. Ideal sharing is obtained by using a single memory shared by all processors, but this maximizes contention. Instead, in this paper we show that the perfect sharing of a single shared memory can be achieved with a collection of two-port memories, as long as the number of processors is less than the number of memories. We show that the allocation problem is NP-complete in general, but admits a fast approximation algorithm that comes within a factor of 3/2 of optimal. The proof uses a new bin packing model, which is interesting in its own right. Moreover, for important special cases that arise in practice, the approximation algorithm is in fact optimal. We also describe an incremental memory allocation algorithm that provides good memory utilization while allowing fast updates.
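The paper's own approximation algorithm and its two-port bin packing model are not reproduced in the abstract. Purely as a generic illustration of the bin-packing flavor of the allocation problem, here is a sketch of the classic first-fit-decreasing heuristic packing per-processor memory demands into fixed-capacity memory banks; the demand values and capacity are hypothetical, and this is not the algorithm from the paper.

```python
def first_fit_decreasing(demands, capacity):
    """Pack memory demands into fixed-capacity memory banks using the
    classic first-fit-decreasing bin-packing heuristic (for illustration
    only; the paper's algorithm uses a different, two-port model)."""
    bins = []   # bins[i] holds the demands assigned to memory bank i
    spare = []  # spare[i] is the remaining capacity of bank i
    for d in sorted(demands, reverse=True):  # largest demands first
        for i in range(len(bins)):
            if d <= spare[i]:        # first bank with enough room
                bins[i].append(d)
                spare[i] -= d
                break
        else:                        # no existing bank fits: open a new one
            bins.append([d])
            spare.append(capacity - d)
    return bins

# Hypothetical example: six per-processor demands, banks of capacity 10
print(first_fit_decreasing([7, 5, 4, 3, 2, 1], 10))
# → [[7, 3], [5, 4, 1], [2]]
```

No bank's total ever exceeds its capacity, and sorting demands in decreasing order is what gives first-fit its usual approximation guarantee in the standard bin-packing model.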