Using predictive prefetching to improve World Wide Web latency
ACM SIGCOMM Computer Communication Review
ACM Transactions on Computer Systems (TOCS)
Web prefetching between low-bandwidth clients and proxies: potential and performance
SIGMETRICS '99 Proceedings of the 1999 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
Page replacement for general caching problems
Proceedings of the tenth annual ACM-SIAM symposium on Discrete algorithms
Proceedings of the ninth annual ACM-SIAM symposium on Discrete algorithms
A unified approach to approximating resource allocation and scheduling
STOC '00 Proceedings of the thirty-second annual ACM symposium on Theory of computing
Minimizing stall time in single and parallel disk systems
Journal of the ACM (JACM)
A Prefetching Protocol Using Client Speculation for the WWW
Near-optimal parallel prefetching and caching
FOCS '96 Proceedings of the 37th Annual Symposium on Foundations of Computer Science
USITS'99 Proceedings of the 2nd conference on USENIX Symposium on Internet Technologies and Systems - Volume 2
Exploring the bounds of web latency reduction from caching and prefetching
USITS'97 Proceedings of the USENIX Symposium on Internet Technologies and Systems
Cost-aware WWW proxy caching algorithms
USITS'97 Proceedings of the USENIX Symposium on Internet Technologies and Systems
Caching and prefetching have often been studied as separate tools for speeding up access to the World Wide Web. The goal of this work is to propose integrated caching and prefetching algorithms for improving the performance of web navigation. We propose a new prefetching algorithm that uses a limited form of user cooperation to decide which documents to prefetch into the local cache on the client side. We show that our prefetching technique is highly beneficial only when integrated with a suitable caching algorithm. We consider two caching algorithms, Greedy-Dual-Size [6,17] and Least Recently Used (LRU), and demonstrate through trace-driven simulation that Greedy-Dual-Size with prefetching outperforms both LRU with prefetching and a set of other popular caching algorithms.
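For readers unfamiliar with Greedy-Dual-Size, the following is a minimal sketch of its eviction rule as described by Cao and Irani [6]; the class and method names are illustrative, not from the paper. Each cached document p carries a priority H(p) = L + cost(p)/size(p), where L is an "inflation" offset raised to the priority of each evicted document, so large or cheap-to-refetch documents are evicted first and recently touched documents are kept.

```python
class GreedyDualSizeCache:
    """Illustrative sketch of Greedy-Dual-Size eviction (not the paper's code)."""

    def __init__(self, capacity):
        self.capacity = capacity   # total size budget, e.g. in bytes
        self.used = 0
        self.L = 0.0               # inflation offset, monotonically non-decreasing
        self.H = {}                # doc -> priority H(doc)
        self.size = {}             # doc -> size

    def access(self, doc, size, cost=1.0):
        """Request `doc`; return True on a cache hit, False on a miss."""
        if doc in self.H:
            # Hit: refresh the priority relative to the current offset L.
            self.H[doc] = self.L + cost / self.size[doc]
            return True
        # Miss: evict lowest-priority documents until the new one fits.
        while self.used + size > self.capacity and self.H:
            victim = min(self.H, key=self.H.get)
            self.L = self.H[victim]          # inflate L to the evicted priority
            self.used -= self.size[victim]
            del self.H[victim], self.size[victim]
        if size <= self.capacity:
            self.H[doc] = self.L + cost / size
            self.size[doc] = size
            self.used += size
        return False
```

The cost/size term is what distinguishes Greedy-Dual-Size from LRU: with cost fixed at 1, it favors keeping many small documents over a few large ones, which tends to raise the hit rate on web workloads.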