The widespread use of the Internet has created two problems: document retrieval latency and network traffic. Caching documents 'close' to users has helped alleviate both. Various caching policies have been proposed and implemented to make the best use of the limited cache available at each caching server. A mesh of caching servers, aided by data diffusion algorithms and the natural hierarchical structure of the Internet topology, has increased the 'virtual' cache size. Yet the available cache remains small compared to the total size of all documents served, and it is a major resource constraint. In this work, we examine how to improve document download time by distributing a fixed amount of total storage across a network, or mesh, of caches. The intuition behind our cache distribution approach is to give more storage to the caching nodes that experience more traffic, in the hope that this will reduce the average latency of document retrieval in the network. We develop a heuristic to estimate the traffic at each cache in the network; each cache then receives a share of the network's total storage capacity proportional to its estimated traffic. Extensive simulation shows that the proposed cache distribution algorithm can reduce latency by up to 80% over prior work, including both Harvest-type and demand-driven data diffusion algorithms. Furthermore, the best improvement was achieved in a cache range that corresponds to practical, real-world cache sizes.
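The traffic-proportional allocation step can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's actual heuristic: it assumes per-node traffic estimates are already available, and the function and variable names are hypothetical.

```python
def allocate_cache(traffic_estimates, total_capacity):
    """Split a fixed total storage budget across caching nodes in
    proportion to each node's estimated traffic (hypothetical sketch).

    traffic_estimates: dict mapping node id -> estimated traffic (e.g. requests/s)
    total_capacity:    total storage available to the whole cache mesh
    """
    total_traffic = sum(traffic_estimates.values())
    if total_traffic == 0:
        # No traffic information: fall back to an even split.
        share = total_capacity / len(traffic_estimates)
        return {node: share for node in traffic_estimates}
    return {node: total_capacity * t / total_traffic
            for node, t in traffic_estimates.items()}

# Example: node 'a' sees three times the traffic of node 'b',
# so it receives three quarters of the storage budget.
sizes = allocate_cache({"a": 3.0, "b": 1.0}, total_capacity=100.0)
```

In a real deployment the traffic estimates would come from the paper's heuristic rather than being given directly, and allocations would likely be rounded to whole storage units.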