The significantly increased address length of IPv6 (128 bits) provides a vast pool of address space. However, it also poses a great challenge to wire-speed route lookup in high-end routing devices, because both lookup latency and storage requirements grow explosively with address length. As a result, even today's most efficient IPv4 route lookup schemes can hardly cope with IPv6. In this paper, we develop a novel IPv6 lookup scheme based on a thorough study of the distributions of real-world route prefixes and the associated RFC documents. The proposed scheme combines bitmap compression with path compression, and employs a variable-stride mechanism to maximize the compression ratio and minimize the average number of memory references. A possible implementation using mixed CAM devices is also suggested to further reduce memory consumption and the number of lookup steps. The experimental results show that for an IPv6 route table containing over 130K prefixes, our scheme can perform 22 million lookups per second even in the worst case, using only 440 KB of SRAM and no more than 3 KB of TCAM. This means it can support 10 Gbps wire-speed forwarding of back-to-back 40-byte packets using on-chip memories or caches. Moreover, the scheme supports incremental updates and scales well.