Today, content replication is a common way of reducing load on servers and the network. Existing content replication solutions suffer from several problems: they require pre-planning and ongoing management, and they are ineffective against sudden traffic spikes. Despite these problems, content replication methods are more popular today than ever, simply because the need for load reduction keeps growing. In this paper, we propose a shared buffering model that, unlike current proxy-based content replication methods, is native to the network and can be used to alleviate the stress that sudden traffic spikes place on servers and the network. We outline the characteristics of a new transport protocol that uses the shared buffers to offload server work to the network and to reduce pressure on overloaded links.
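The core idea can be illustrated with a minimal sketch (all names here are illustrative assumptions, not the paper's actual protocol): an in-network shared buffer caches responses so that, during a spike, repeated requests for the same object are answered from the buffer rather than reaching the origin server.

```python
# Hypothetical sketch of an in-network shared buffer absorbing a traffic
# spike. Names (SharedBuffer, origin_fetch) are illustrative, not from
# the paper's protocol design.

class SharedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # object_id -> payload held in the network
        self.origin_hits = 0     # requests that had to reach the server

    def request(self, object_id, origin_fetch):
        """Serve from the shared buffer if possible; else fetch from origin."""
        if object_id in self.store:
            return self.store[object_id]      # absorbed by the network
        self.origin_hits += 1
        payload = origin_fetch(object_id)     # this load reaches the server
        if len(self.store) < self.capacity:
            self.store[object_id] = payload   # buffer for later requests
        return payload

# A flash crowd: 1000 requests for the same object arrive,
# but only the first one reaches the origin server.
buf = SharedBuffer(capacity=64)
for _ in range(1000):
    buf.request("video/opening-ceremony", lambda oid: f"payload:{oid}")
print(buf.origin_hits)  # -> 1
```

The sketch captures only the load-reduction effect; the paper's transport protocol would additionally have to manage buffer placement and eviction inside the network itself.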