Caching recently referenced IP addresses and their forwarding information is an effective strategy for increasing routing-lookup speed. This paper proposes a multizone, non-blocking, pipelined cache for IP routing lookup that achieves lower miss rates than previously reported IP caches. The two-stage pipeline design provides a half-prefix, half-full-address cache and reduces cache power consumption. By adopting a very small non-blocking buffer, the cache reduces the effective miss penalty. The design retains the benefit of storing prefixes while requiring smaller table expansions (up to 50% less) than pure prefix caches. Simulation results on real traffic show lower cache miss rates and up to 30% reduction in power consumption.
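The organization sketched in the abstract (two cache zones, one holding prefixes and one holding full addresses, plus a very small buffer that merges outstanding misses so they do not block later lookups) can be illustrated with a toy model. This is a minimal sketch under stated assumptions, not the paper's actual design: the /24 prefix split, the zone sizes, the LRU policy, and all names here are hypothetical choices for illustration.

```python
from collections import OrderedDict

class TwoZoneIPCache:
    """Toy model of a two-zone IP-lookup cache: one zone caches short
    prefixes (assumed /24 here), the other full /32 addresses.  A small
    buffer tracks outstanding misses and merges repeat lookups for an
    in-flight address, a crude stand-in for non-blocking behaviour."""

    def __init__(self, prefix_slots=64, full_slots=64, mshr_slots=4):
        self.prefix_zone = OrderedDict()   # LRU map: /24 prefix -> present
        self.full_zone = OrderedDict()     # LRU map: /32 address -> present
        self.mshr = set()                  # addresses with a miss in flight
        self.prefix_slots = prefix_slots
        self.full_slots = full_slots
        self.mshr_slots = mshr_slots
        self.hits = self.misses = self.merged = 0

    @staticmethod
    def _p24(addr):
        return addr & 0xFFFFFF00           # /24 prefix of an IPv4 integer

    def lookup(self, addr):
        p = self._p24(addr)
        if p in self.prefix_zone:          # stage 1: prefix zone
            self.prefix_zone.move_to_end(p)
            self.hits += 1
            return "hit"
        if addr in self.full_zone:         # stage 2: full-address zone
            self.full_zone.move_to_end(addr)
            self.hits += 1
            return "hit"
        if addr in self.mshr:              # miss already outstanding: merge
            self.merged += 1
            return "merged"
        self.misses += 1
        if len(self.mshr) < self.mshr_slots:
            self.mshr.add(addr)
        return "miss"

    def fill(self, addr, cacheable_as_prefix):
        """Miss resolved by the routing table: install the entry in the
        matching zone and evict the least recently used line if full."""
        self.mshr.discard(addr)
        if cacheable_as_prefix:
            zone, cap, key = self.prefix_zone, self.prefix_slots, self._p24(addr)
        else:
            zone, cap, key = self.full_zone, self.full_slots, addr
        zone[key] = True
        zone.move_to_end(key)
        while len(zone) > cap:
            zone.popitem(last=False)       # drop the LRU entry
```

A lookup that misses leaves the address in the small miss buffer, so a second lookup for the same address before `fill` is merged rather than counted as another miss; once filled into the prefix zone, any address sharing that /24 prefix hits without consulting the full-address zone.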