This paper proposes a novel Internet Protocol (IP) packet forwarding architecture for IP routers, comprising a non-blocking Multizone Pipelined Cache (MPC) and a hardware-supported IP routing lookup method; it also describes a method for expansion-free software lookups. The MPC achieves lower miss rates than those reported in the literature. It uses a two-stage pipeline for a half-prefix/half-full-address IP cache, which results in lower activity than conventional caches, and its updating technique allows the IP routing lookup mechanism to decide freely when and how to issue update requests. A small non-blocking buffer reduces the MPC's effective miss penalty. The design caches prefixes yet requires significantly less routing-table expansion than conventional prefix caches. The hardware-based IP lookup mechanism uses a Ternary Content Addressable Memory (TCAM) with a novel Hardware-based Longest Prefix Matching (HLPM) method. HLPM incurs lower signaling activity when processing short matching prefixes than alternative designs, determines the longest matching prefix with a simple mechanism, and requires only a single write per table update.
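To make the lookup operation concrete, the following is a minimal software sketch of longest prefix matching (LPM), the function that the paper's hardware HLPM unit computes in a TCAM. The table entries, addresses, and next-hop labels here are hypothetical examples for illustration, not taken from the paper, and a linear scan stands in for the TCAM's parallel match.

```python
# Illustrative software model of longest-prefix matching (LPM), the
# operation a router's lookup engine performs per packet. Entries and
# addresses below are hypothetical examples.

def ip(s):
    """Convert a dotted-quad string to a 32-bit integer."""
    a, b, c, d = (int(x) for x in s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def lpm_lookup(table, addr, width=32):
    """Return the next hop of the longest prefix matching addr.

    table: list of (prefix, prefix_len, next_hop) tuples.
    addr:  integer IP address, `width` bits wide.
    """
    best_len, best_hop = -1, None
    for prefix, plen, hop in table:
        # A /plen prefix matches when the top plen bits of addr agree.
        if plen > best_len and (addr >> (width - plen)) == (prefix >> (width - plen)):
            best_len, best_hop = plen, hop
    return best_hop

# Hypothetical routing table: (prefix, prefix length, next hop).
table = [
    (ip("10.0.0.0"),   8, "A"),
    (ip("10.1.0.0"),  16, "B"),
    (ip("10.1.2.0"),  24, "C"),
]

print(lpm_lookup(table, ip("10.1.2.3")))  # -> C (10.1.2.0/24 is longest match)
print(lpm_lookup(table, ip("10.9.9.9")))  # -> A (only 10.0.0.0/8 matches)
```

A TCAM evaluates all entries in parallel rather than sequentially; the paper's HLPM contribution lies in selecting the longest of the matching entries with low signaling activity and updating the table with a single write, which this sequential model does not attempt to capture.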