Longest-prefix matching (LPM) is a key processing function of Internet routers: it determines which outbound port to use for a given destination address. The time required to look up the outbound port must be less than the minimum inter-arrival time between packets on a given input port. Lookup times can be reduced by caching address prefixes from previous lookups. However, every miss in the prefix cache (PC) initiates a traversal of the routing table to find the longest prefix matching the destination address. Because this table is stored in memory, a traversal requires multiple (perhaps many) memory references, and these references become an increasingly serious bottleneck as line rates increase. In this paper we present a novel second level of caching that expedites lookups that miss in the PC. We call this second level a dynamic substride cache (DSC). Extensive experiments using real traffic traces and real routing tables show that the DSC is extremely effective in reducing the number of memory references required by a stream of lookups. We also present analytical models to find the optimal partition of a fixed amount of memory between the PC and the DSC.
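To make the lookup path concrete, the following is a minimal sketch of LPM over a binary trie with a simple prefix cache in front of it. All names here (`PrefixTrie`, `cached_lookup`, `to_bits`) are illustrative assumptions, not the paper's implementation; the cache shown is a plain full-address cache, and the DSC itself is not modeled.

```python
# Illustrative sketch: binary-trie LPM with a cache in front.
# On a cache hit the trie traversal (and its memory references) is skipped;
# on a miss the full traversal runs, which is the cost the DSC targets.

class PrefixTrie:
    """Binary trie mapping bit-prefixes to outbound ports."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, port):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["port"] = port

    def lookup(self, addr_bits):
        # Walk the trie, remembering the deepest (longest) matching prefix.
        node, best = self.root, None
        for b in addr_bits:
            if "port" in node:
                best = node["port"]
            if b not in node:
                return best
            node = node[b]
        return node.get("port", best)

def to_bits(addr, width=32):
    """Most-significant-bit-first bit list of a 32-bit IPv4 address."""
    return [(addr >> (width - 1 - i)) & 1 for i in range(width)]

trie = PrefixTrie()
trie.insert(to_bits(0xC0A80000)[:16], "port1")   # 192.168.0.0/16
trie.insert(to_bits(0xC0A80100)[:24], "port2")   # 192.168.1.0/24

cache = {}  # hypothetical cache: destination address -> port

def cached_lookup(addr):
    if addr in cache:                    # cache hit: no trie traversal
        return cache[addr]
    port = trie.lookup(to_bits(addr))    # cache miss: full trie traversal
    cache[addr] = port
    return port

print(cached_lookup(0xC0A80105))  # 192.168.1.5 -> "port2" (/24 is longest match)
print(cached_lookup(0xC0A80205))  # 192.168.2.5 -> "port1" (falls back to /16)
```

A repeated lookup of the same address hits the cache and avoids the trie walk entirely; the paper's DSC addresses the remaining misses by letting them start their traversal partway down the structure instead of from the root.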