Internet routers and Ethernet switches contain packet buffers to hold packets during times of congestion. Packet buffers are at the heart of every packet switch and router, which have a combined annual market of tens of billions of dollars, and equipment vendors spend hundreds of millions of dollars on memory each year. Designing packet buffers used to be easy: DRAM was cheap, low-power, and widely used. But something happened at 10 Gb/s, when packets started to arrive and depart faster than the access time of a DRAM. Alternative memories were needed, but SRAM is too expensive and power-hungry. A caching solution is appealing, with a hierarchy of SRAM and DRAM, as used by the computer industry. However, in switches and routers it is not acceptable to have a "miss rate," as misses reduce throughput and break pipelines. In this paper we describe how to build caches with a 100% hit rate under all conditions, by exploiting the fact that switches and routers always store data in FIFO queues. We describe a number of different ways to do this, with and without pipelining, and with static or dynamic allocation of memory. In each case, we prove a lower bound on how big the cache needs to be, and propose an algorithm that meets, or comes close to, the lower bound. These techniques are practical and have been implemented in fast silicon; as a result, we expect them to fundamentally change the way switches and routers use external memory.
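To make the caching idea concrete, the following is a minimal sketch (not the paper's algorithm) of the common head/tail arrangement it builds on: each FIFO queue keeps its newest and oldest cells in small SRAM caches, while the body of the queue lives in DRAM and is only ever accessed in blocks of `b` cells, so every DRAM operation is a wide, efficient transfer. The class name `FifoBufferedQueue` and the block size `b = 4` are illustrative assumptions.

```python
from collections import deque

class FifoBufferedQueue:
    """Illustrative FIFO queue whose body lives in (simulated) slow DRAM,
    with small SRAM caches at the tail (arrivals) and head (departures).
    Not the paper's exact algorithm; a sketch of the hierarchy it studies."""

    def __init__(self, block_size=4):
        self.b = block_size        # cells per DRAM block (assumed)
        self.tail_sram = deque()   # newest cells, not yet written to DRAM
        self.dram = deque()        # queue body: blocks of b cells each
        self.head_sram = deque()   # oldest cells, staged for departure

    def enqueue(self, cell):
        self.tail_sram.append(cell)
        # Once b cells accumulate, write them to DRAM as a single block,
        # so DRAM only ever sees block-sized, access-time-friendly writes.
        if len(self.tail_sram) >= self.b:
            block = [self.tail_sram.popleft() for _ in range(self.b)]
            self.dram.append(block)

    def dequeue(self):
        if not self.head_sram:
            if self.dram:
                # Refill the head cache with one whole block from DRAM.
                self.head_sram.extend(self.dram.popleft())
            else:
                # Queue is short: bypass DRAM and serve from the tail cache.
                self.head_sram.extend(self.tail_sram)
                self.tail_sram.clear()
        return self.head_sram.popleft() if self.head_sram else None

# FIFO order is preserved end to end across the three-level hierarchy.
q = FifoBufferedQueue()
for i in range(10):
    q.enqueue(i)
print([q.dequeue() for _ in range(10)])  # → [0, 1, ..., 9]
```

The real design problem the paper addresses is sizing the head SRAM and scheduling the DRAM block reads so the head cache never runs dry under any arrival pattern, which is what makes the 100% hit rate possible; the sketch above ignores timing and simply shows the data movement.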