High throughput and large capacity pipelined dynamic search tree on FPGA
Proceedings of the 18th annual ACM/SIGDA international symposium on Field programmable gate arrays
Memory-efficient and scalable virtual routers using FPGA
Proceedings of the 19th ACM/SIGDA international symposium on Field programmable gate arrays
Characterization of power-aware reconfiguration in FPGA-based networking hardware
NETWORKING'11 Proceedings of the IFIP TC 6 international conference on Networking
Scalable architecture for 135 Gbps IPv6 lookup on FPGA (abstract only)
Proceedings of the ACM/SIGDA international symposium on Field Programmable Gate Arrays
FlashTrie: beyond 100-Gb/s IP route lookup using hash-based prefix-compressed trie
IEEE/ACM Transactions on Networking (TON)
Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays
An architecture for IPv6 lookup using parallel index generation units
ARC'13 Proceedings of the 9th international conference on Reconfigurable Computing: architectures, tools, and applications
Most high-speed Internet Protocol (IP) lookup implementations use tree traversal and pipelining. Due to the limited on-chip memory and number of I/O pins of Field Programmable Gate Arrays (FPGAs), state-of-the-art designs cannot support the current largest routing table (consisting of 257K prefixes in backbone routers). We propose a novel scalable, high-throughput, low-power SRAM-based linear pipeline architecture for IP lookup. Using a single FPGA, the proposed architecture can support the current largest routing table, or even larger tables of up to 400K prefixes. Our architecture can also be easily partitioned, so as to use external SRAM to handle even larger routing tables (up to 1.7M prefixes). Our implementation achieves a throughput of 340 million lookups per second (109 Gbps), even when external SRAM is used. The use of SRAM (instead of TCAM) leads to an order-of-magnitude reduction in power dissipation. Additionally, the architecture supports power saving by allowing only a portion of the memory to be active on each memory access. Our design also maintains packet input order and supports in-place, non-blocking route updates.
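The core operation behind such designs is longest-prefix match (LPM) via trie traversal: each level of the trie maps to one pipeline stage with its own memory bank, so one lookup completes per cycle once the pipeline is full. The sketch below is a minimal software analogue of that traversal, not the paper's hardware design; all names and the sequential walk are illustrative assumptions.

```python
# Minimal software sketch of trie-based longest-prefix match (LPM).
# In the pipelined hardware design, each trie level occupies its own
# pipeline stage and memory bank; here we simply walk the trie
# sequentially, remembering the deepest matching prefix seen so far.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # 0-branch and 1-branch
        self.next_hop = None          # set if a prefix ends here

def insert(root, prefix_bits, next_hop):
    """Insert a prefix (string of '0'/'1' bits) with its next hop."""
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Return the next hop of the longest matching prefix, or None."""
    node, best = root, root.next_hop
    for b in addr_bits:
        node = node.children[int(b)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop  # deepest (longest) match so far
    return best

root = TrieNode()
insert(root, "10", "A")    # prefix 10/2   -> next hop A
insert(root, "1011", "B")  # prefix 1011/4 -> next hop B
print(lookup(root, "10110000"))  # 1011/4 is the longest match -> B
print(lookup(root, "10000000"))  # only 10/2 matches -> A
```

Because each lookup visits at most one node per trie level, mapping levels to independent memory stages (as in the SRAM-based pipeline above) lets a new lookup enter the pipeline every cycle.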