Memory-efficient state lookups with fast updates

  • Authors:
  • Sandeep Sikka; George Varghese

  • Affiliations:
  • Washington University; UCSD

  • Venue:
  • Proceedings of the conference on Applications, Technologies, Architectures, and Protocols for Computer Communication

  • Year:
  • 2000

Abstract

Routers must do a best matching prefix lookup for every packet; solutions for Gigabit speeds are well known. As Internet link speeds grow higher, we seek a scalable solution whose speed scales with memory speeds while allowing large prefix databases. In this paper we show that providing such a solution requires careful attention to memory allocation and pipelining. This is because fast lookups require on-chip or off-chip SRAM, which is limited by either expense or manufacturing process. We show that doing so while providing guarantees on the number of prefixes supported requires new algorithms and the breaking down of traditional abstraction boundaries between hardware and software. We introduce new problem-specific memory allocators with provable memory utilization guarantees that can reach 100%; this is in contrast to all standard allocators, which can only guarantee 20% utilization when the requests can come in the range [1 ... 32]. An optimal version of our algorithm requires a new (but feasible) SRAM memory design that allows shifted access in addition to normal word access. Our techniques generalize to other IP lookup schemes and to other state lookups besides prefix lookup.
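For readers unfamiliar with the lookup problem the abstract refers to, the following is a minimal sketch of best matching (longest) prefix lookup using a one-bit-at-a-time binary trie. It illustrates only the problem statement, not the SRAM-resident, pipelined structures or the problem-specific allocators the paper proposes; the prefix table and next-hop values are hypothetical.

    /*
     * Minimal longest-prefix-match sketch: a binary trie over IPv4
     * addresses. Illustrative only; not the paper's data structure.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    struct node {
        struct node *child[2];
        int has_prefix;   /* a prefix ends at this node */
        int next_hop;     /* forwarding info for that prefix */
    };

    static struct node *new_node(void)
    {
        struct node *n = calloc(1, sizeof *n);
        if (!n) { perror("calloc"); exit(1); }
        return n;
    }

    /* Insert a prefix given as (address, length) into the trie. */
    static void insert(struct node *root, uint32_t addr, int len, int next_hop)
    {
        struct node *n = root;
        for (int i = 0; i < len; i++) {
            int bit = (addr >> (31 - i)) & 1;
            if (!n->child[bit])
                n->child[bit] = new_node();
            n = n->child[bit];
        }
        n->has_prefix = 1;
        n->next_hop = next_hop;
    }

    /* Walk the trie, remembering the longest matching prefix seen. */
    static int lookup(const struct node *root, uint32_t addr)
    {
        const struct node *n = root;
        int best = -1;                      /* -1 means no match */
        for (int i = 0; i < 32 && n; i++) {
            if (n->has_prefix)
                best = n->next_hop;
            n = n->child[(addr >> (31 - i)) & 1];
        }
        if (n && n->has_prefix)
            best = n->next_hop;
        return best;
    }

    int main(void)
    {
        struct node *root = new_node();

        /* Hypothetical table: 10.0.0.0/8 -> port 1, 10.1.0.0/16 -> port 2 */
        insert(root, 0x0A000000u, 8, 1);
        insert(root, 0x0A010000u, 16, 2);

        /* 10.1.2.3 matches both prefixes; the longer /16 must win. */
        printf("next hop for 10.1.2.3: %d\n", lookup(root, 0x0A010203u));
        /* 10.9.9.9 matches only the /8. */
        printf("next hop for 10.9.9.9: %d\n", lookup(root, 0x0A090909u));
        return 0;
    }

A one-bit trie like this needs up to 32 dependent memory accesses per lookup and allocates nodes of a single fixed size; the schemes the paper targets use multibit (variable-stride) trie nodes, which is precisely what creates allocation requests of many different sizes (e.g. in the range [1 ... 32]) and motivates the allocator guarantees discussed in the abstract.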