Digest caches have been proposed as an effective method to speed up packet classification in network processors. In this paper, we show that the presence of a large number of small flows and a few large flows in the Internet has an adverse impact on the performance of these digest caches. In the Internet, a few large flows transfer the majority of packets, whereas the many small flows contribute only a small fraction of the total packets transferred. In such a scenario, the LRU cache replacement policy, which gives maximum priority to the most recently accessed digest, tends to evict digests belonging to the few large flows. We propose a new cache management algorithm called Saturating Priority (SP), which aims at improving the performance of digest caches in network processors by exploiting the disparity between the number of flows and the number of packets transferred. Our experimental results demonstrate that SP performs better than the widely used LRU cache replacement policy in size-constrained caches. Further, we characterize the misses experienced by flow identifiers in digest caches.
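The abstract describes Saturating Priority only at a high level, so the following C++ sketch illustrates one plausible reading of such a policy for a set-associative digest cache: a newly inserted digest enters at the lowest priority, each hit raises the entry's priority up to a saturation value, and the lowest-priority entry of a set is evicted on a miss. The DigestCache class, the hash function, and all sizes and thresholds below are illustrative assumptions for this sketch, not the paper's actual design.

// Minimal sketch of a set-associative digest cache with an assumed
// Saturating Priority (SP)-style replacement rule. All parameters,
// the hash, and the update rule are hypothetical illustrations.
#include <cstdint>
#include <vector>
#include <iostream>

struct Entry {
    bool     valid    = false;
    uint32_t digest   = 0;   // compact hash of the flow identifier
    uint8_t  priority = 0;   // saturating counter used for eviction
};

class DigestCache {
public:
    DigestCache(size_t sets, size_t ways, uint8_t max_priority)
        : sets_(sets), ways_(ways), max_priority_(max_priority),
          table_(sets * ways) {}

    // Returns true on a hit; on a miss, inserts the digest by evicting
    // the lowest-priority (or an invalid) entry of the set.
    bool lookup(uint64_t flow_id) {
        uint32_t digest = static_cast<uint32_t>((flow_id * 0x9E3779B97F4A7C15ULL) >> 40);
        size_t set = digest % sets_;
        Entry* line = &table_[set * ways_];

        size_t victim = 0;
        for (size_t w = 0; w < ways_; ++w) {
            if (line[w].valid && line[w].digest == digest) {
                // Hit: raise priority, saturating at max_priority_.
                if (line[w].priority < max_priority_) ++line[w].priority;
                return true;
            }
            // Track the lowest-priority (or empty) way as the eviction victim.
            if (!line[w].valid || line[w].priority < line[victim].priority)
                victim = w;
        }
        // Miss: the new digest enters at the lowest priority.
        line[victim] = Entry{true, digest, 0};
        return false;
    }

private:
    size_t sets_, ways_;
    uint8_t max_priority_;
    std::vector<Entry> table_;
};

int main() {
    DigestCache cache(/*sets=*/256, /*ways=*/4, /*max_priority=*/3);
    // One long-lived flow interleaved with a stream of one-packet flows.
    uint64_t large_flow = 42;
    size_t hits = 0;
    for (int pkt = 0; pkt < 1000; ++pkt) {
        hits += cache.lookup(large_flow) ? 1 : 0;
        cache.lookup(100000 + pkt);   // small flows, each seen only once
    }
    std::cout << "large-flow hits: " << hits << " / 1000\n";
}

Under this assumed rule, a burst of one-packet flows competes only among its own low-priority entries, so the long-lived flow's digest tends to survive; under LRU the same burst would push that digest out of the set, which is the behaviour the abstract attributes to SP's advantage over LRU.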