With the growing popularity of the World Wide Web, the research community has seen a proliferation of Web caching algorithms. This paper presents a new, efficient, and robust algorithm called Least Unified-Value (LUV). LUV evaluates a Web document by its cost normalized by the likelihood of the document being re-referenced. This yields a normalized assessment of each document's contribution to the value of the cache, leading to a fair replacement policy. LUV can accommodate arbitrary cost functions of Web documents, so it can be tuned to optimize any particular performance measure of interest, such as the hit rate, the byte hit rate, or the delay-savings ratio. Unlike most existing algorithms, LUV exploits the complete reference history of each document, in terms of reference frequency and recency, to estimate the likelihood of re-reference. Nevertheless, LUV admits an efficient implementation in both space and time: the reference history of a document can be maintained in only a few bytes, and the time complexity of the algorithm is O(log₂ n), where n is the number of documents in the cache. Trace-driven simulations show that LUV outperforms existing algorithms on various performance measures over a wide range of cache configurations.
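The abstract describes LUV's value function only at a high level: a document's cost, normalized by a re-reference likelihood estimated from its full frequency and recency history, with cache operations in O(log₂ n) time via a priority queue. The sketch below illustrates one way such a policy could work; the class name, the exponential recency weight, and the log-space heap key are assumptions for illustration, not the paper's actual formulation.

```python
import heapq
import itertools
import math


class LUVCache:
    """Illustrative sketch of a Least-Unified-Value-style cache.

    Assumptions (not from the abstract): each past reference contributes
    an exponentially decaying weight decay**age to the re-reference
    estimate, and the per-document history is folded into a single float,
    so only a few bytes of state are kept per document.
    """

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity       # total bytes available
        self.used = 0                  # bytes currently occupied
        self.lam = -math.log(decay)    # decay rate per virtual-time tick (0 < decay < 1)
        self.clock = 0                 # virtual time, advanced on every access
        self.ver = itertools.count()   # version counter for lazy heap invalidation
        self.docs = {}                 # url -> [size, cost, hist, last_ref, version]
        self.heap = []                 # min-heap of (key, version, url)

    def _key(self, size, cost, hist):
        # value(t) = (cost/size) * hist * exp(-lam * (t - last_ref)).
        # Comparing log(cost*hist/size) + lam*last_ref gives the same
        # ordering at any future time, so heap keys never go stale
        # between updates to the same document.
        return math.log(cost * hist / size) + self.lam * self.clock

    def access(self, url, size, cost):
        self.clock += 1
        if url in self.docs:
            d = self.docs[url]
            # Fold the old history forward to "now", then count this reference.
            d[2] = d[2] * math.exp(-self.lam * (self.clock - d[3])) + 1.0
            d[3] = self.clock
            d[4] = next(self.ver)
        else:
            if size > self.capacity:
                return                 # document can never fit
            while self.used + size > self.capacity:
                self._evict()
            self.docs[url] = [size, cost, 1.0, self.clock, next(self.ver)]
            self.used += size
        d = self.docs[url]
        heapq.heappush(self.heap, (self._key(d[0], d[1], d[2]), d[4], url))

    def _evict(self):
        # Pop the lowest-value document; skip entries whose version is stale.
        while self.heap:
            _, ver, url = heapq.heappop(self.heap)
            d = self.docs.get(url)
            if d is None or d[4] != ver:
                continue
            self.used -= d[0]
            del self.docs[url]
            return


# Small demo: a frequently referenced document outlives a one-shot one.
cache = LUVCache(capacity=100, decay=0.9)
for _ in range(3):
    cache.access("a", 50, 1)    # "a" referenced repeatedly -> high value
cache.access("b", 50, 1)        # "b" referenced once
cache.access("c", 50, 1)        # cache full: forces one eviction ("b")
```

The log-space heap key is one way to reconcile a recency-decaying value with a heap whose stored priorities cannot be updated in place: because every resident document decays at the same rate, the ordering fixed at insertion time remains correct until the document is referenced again, at which point a fresh entry is pushed and the old one is lazily discarded.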