Next-generation access routers and edge devices need to provide functionality such as layer-4 packet forwarding and firewall/security checks. A challenging issue is therefore how to achieve fast packet filtering and forwarding at low cost. This paper studies flow caching mechanisms for fast layer-4 packet forwarding. We show by model analysis that flow caching performance is not very sensitive to the flow cache table lookup speed, but is sensitive to the cache hit ratio. By making use of the available layer-4 information, we introduce two filtering modules to enhance the cache hit ratio. We demonstrate, by simulation on real traces, that adding these two filtering modules reduces the cache miss ratio by up to 50% and relaxes the required full header filtering speed by up to five-fold. The proposed flow caching mechanism is potentially useful for access routers and edge devices where cost is at a premium and software-based filtering modules are dynamically generated.
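The core idea above — cache the forwarding decision per flow and fall back to full header filtering only on a miss — can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the class and parameter names (`FlowCache`, `full_filter`, LRU eviction) are assumptions chosen for clarity.

```python
from collections import OrderedDict

class FlowCache:
    """LRU flow cache keyed on the layer-4 five-tuple.

    Illustrative sketch only: `full_filter` stands in for the slow
    full-header filtering path; names and eviction policy are
    assumptions, not taken from the paper.
    """

    def __init__(self, capacity, full_filter):
        self.capacity = capacity
        self.full_filter = full_filter  # slow path: full rule lookup
        self.cache = OrderedDict()      # five-tuple -> forwarding action
        self.hits = 0
        self.lookups = 0

    def forward(self, pkt):
        # Flows are identified by the classic five-tuple.
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        self.lookups += 1
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)       # refresh LRU position
            return self.cache[key]
        action = self.full_filter(pkt)        # miss: full header filtering
        self.cache[key] = action
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return action

    def hit_ratio(self):
        return self.hits / self.lookups if self.lookups else 0.0
```

In this model, the fraction of packets hitting the slow path is `1 - hit_ratio()`, which is why the paper's filtering modules target the hit ratio rather than the lookup speed of the cache itself.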