Parallelism versus Memory Allocation in Pipelined Router Forwarding Engines

  • Authors:
  • Fan Chung, Ronald Graham, Jia Mao, George Varghese

  • Affiliations:
  • Department of Mathematics, University of California, San Diego, La Jolla, CA 92093, USA (Chung); Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA (Graham, Mao, Varghese)

  • Venue:
  • Theory of Computing Systems
  • Year:
  • 2006

Abstract

A crucial problem that needs to be solved is the allocation of memory to processors in a pipeline. Ideally, the processor memories should be completely separate (i.e., one-port memories) in order to minimize contention; however, this also minimizes memory sharing. At the other extreme, a single memory shared by all processors gives ideal sharing but maximizes contention. Instead, in this paper we show that perfect memory sharing can be achieved with a collection of two-port memories, as long as the number of processors is less than the number of memories. We show that the allocation problem is NP-complete in general, but admits a fast approximation algorithm that comes within a factor of $\frac{3}{2}$ asymptotically. The proof uses a new bin packing model, which is interesting in its own right. Further, for important special cases that arise in practice, a more sophisticated modification of this approximation algorithm is in fact optimal. We also discuss the online memory allocation problem and present fast online algorithms that provide good memory utilization while allowing fast updates.
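To give a feel for the packing flavor of the allocation problem, the sketch below implements the classic first-fit-decreasing (FFD) bin-packing heuristic. This is only a generic illustration: it is not the paper's algorithm, and the paper's $\frac{3}{2}$ guarantee is proved in a new two-port bin packing model that this standard heuristic does not capture. The memory demands and bin capacity in the example are hypothetical.

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of the given capacity using first-fit-decreasing:
    sort items largest-first, then place each into the first bin with room,
    opening a new bin only when no existing bin fits. (Illustrative only;
    not the allocation algorithm from the paper.)"""
    free = []      # remaining free capacity of each open bin
    contents = []  # items placed in each bin
    for size in sorted(sizes, reverse=True):
        for i, room in enumerate(free):
            if size <= room:
                free[i] -= size
                contents[i].append(size)
                break
        else:
            # no open bin fits this item: open a new one
            free.append(capacity - size)
            contents.append([size])
    return contents

# Hypothetical memory demands of forwarding-table partitions, packed into
# memories of capacity 10.
demands = [5, 7, 5, 2, 4, 2, 3]
packing = first_fit_decreasing(demands, capacity=10)
```

In the paper's setting the analogue of a "bin" is a two-port memory that at most two processors may share, which is what changes the model and the achievable approximation factor.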