Algorithms for advanced packet classification with ternary CAMs
Proceedings of the 2005 conference on Applications, technologies, architectures, and protocols for computer communications
Survey and taxonomy of packet classification techniques
ACM Computing Surveys (CSUR)
Low power architecture for high speed packet classification
Proceedings of the 4th ACM/IEEE Symposium on Architectures for Networking and Communications Systems
A Memory-Efficient FPGA-based Classification Engine
Proceedings of the 16th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM '08)
A Scalable High Throughput Firewall in FPGA
Proceedings of the 16th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM '08)
Fast and scalable packet classification using perfect hash functions
Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays
Large-scale wire-speed packet classification on FPGAs
Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays
A High-Speed and Memory Efficient Pipeline Architecture for Packet Classification
Proceedings of the 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM '10)
HaRP: Rapid Packet Classification via Hashing Round-Down Prefixes
IEEE Transactions on Parallel and Distributed Systems
This article pursues fast packet classification with low on-chip memory requirements, realized on a Xilinx Virtex-6 FPGA. Based on hashing round-down prefixes specified in filter rules (dubbed HaRP), our implemented classifier exhibits an extremely low on-chip memory requirement (lowering the byte count per rule by a factor of 8.6 compared with its most recent counterpart [2]), taking only 50% of Virtex-6 on-chip memory to store every large rule dataset (with some 30K rules) examined. In addition, it achieves higher throughput than any known FPGA implementation, exceeding 200 MPPS (million packet lookups per second) with 8 processing units and 8 memory banks in the HaRP pipeline, supporting a line rate above 130 Gbps under bi-directional traffic in the worst case of 40-byte packets. By reducing memory probes per lookup, enhanced HaRP further boosts the classification speed to 255 MPPS.
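To illustrate the core idea behind hashing round-down prefixes, the sketch below (not the authors' implementation) rounds each rule prefix down to the nearest of a few designated lengths, hashes on those rounded bits, and verifies the full prefix on lookup. The tread lengths, table size, and helper names are assumptions for illustration only.

```python
# Illustrative sketch of round-down prefix hashing (assumed parameters,
# not the HaRP paper's actual design or data layout).
TREADS = (0, 8, 16, 24, 32)   # assumed designated prefix lengths
NUM_BUCKETS = 256             # assumed hash-table size

def round_down(length):
    """Largest tread <= length."""
    return max(t for t in TREADS if t <= length)

def bucket(addr, tread):
    """Hash the rounded-down prefix bits of a 32-bit address into a bucket."""
    key = (addr >> (32 - tread)) if tread else 0
    return hash(key) % NUM_BUCKETS

table = [[] for _ in range(NUM_BUCKETS)]

def insert(prefix, length, rule_id):
    """Store the rule under its rounded-down length, keeping the true length."""
    t = round_down(length)
    table[bucket(prefix, t)].append((prefix, length, rule_id))

def lookup(addr):
    """Probe one bucket per tread; verify the full prefix match in each."""
    matches = []
    for t in TREADS:
        for prefix, length, rule_id in table[bucket(addr, t)]:
            if (addr >> (32 - length)) == (prefix >> (32 - length)):
                matches.append(rule_id)
    return matches
```

Because a lookup probes at most one bucket per tread rather than one per possible prefix length, the number of memory probes per packet stays small and fixed, which is what makes the scheme amenable to a pipelined FPGA realization with one memory bank per probe.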