Randomized algorithms
The problem of document replacement in web caches has received much attention in recent research, and it has been shown that the eviction rule "replace the least recently used document" performs poorly in web caches. Instead, using a combination of several criteria, such as the recency and frequency of use, the size, and the cost of fetching a document, leads to a sizable improvement in hit rate and latency reduction. However, implementing these novel schemes requires maintaining complicated data structures. We propose randomized algorithms for approximating any existing web-cache replacement scheme, thereby avoiding the need for such data structures. At document-replacement times, the randomized algorithm samples N documents from the cache and evicts the least useful document in the sample, where usefulness is determined according to the criteria mentioned above. The next M < N least useful documents are retained for the succeeding iteration. When the next replacement is to be performed, the algorithm draws N − M new samples from the cache and evicts the least useful document among the N − M new samples and the M previously retained. Using theory and simulations, we analyze the algorithm and find that it matches the performance of existing document-replacement schemes for values of N and M as low as 8 and 2, respectively. Interestingly, we find that retaining even a small number of samples from one iteration to the next leads to an exponential improvement in performance over retaining no samples at all.
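The sampling-with-memory eviction step described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the `usefulness` scoring function, the dictionary-based cache, and all names are assumptions, and a real proxy cache would plug in its own cost/recency/size-based utility criterion.

```python
import random

def evict_one(cache, usefulness, retained, n=8, m=2):
    """One eviction step of the randomized replacement sketch.

    cache:      dict mapping document id -> metadata (illustrative).
    usefulness: function scoring a document id (higher = more useful);
                stands in for the recency/frequency/size/cost criterion.
    retained:   ids kept from the previous iteration (at most m of them).
    Returns the evicted id and the ids to retain for the next iteration.
    """
    # Draw fresh samples so that, together with the retained documents,
    # we consider n candidates in total (n - m fresh ones in steady state).
    pool = [d for d in cache if d not in retained]
    fresh = random.sample(pool, min(n - len(retained), len(pool)))
    candidates = fresh + list(retained)

    # Rank candidates from least to most useful.
    candidates.sort(key=usefulness)

    victim = candidates[0]               # evict the least useful document
    next_retained = candidates[1:1 + m]  # keep the next m least useful
    del cache[victim]
    return victim, next_retained
```

A usage sketch: on the first eviction `retained` is empty and all `n` samples are fresh; afterwards, each call reuses the `m` documents carried over and draws only `n - m` new ones, which is where the exponential improvement over memoryless sampling comes from.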