Amortized efficiency of list update and paging rules
Communications of the ACM
New results on server problems
SIAM Journal on Discrete Mathematics
Journal of Algorithms
Competitive paging and dual-guided on-line weighted caching and matching algorithms
Online computation and competitive analysis
STOC '99 Proceedings of the thirty-first annual ACM symposium on Theory of computing
Proceedings of the ninth annual ACM-SIAM symposium on Discrete algorithms
Memory Versus Randomization in On-line Algorithms (Extended Abstract)
ICALP '89 Proceedings of the 16th International Colloquium on Automata, Languages and Programming
Competitive Analysis of Randomized Paging Algorithms
ESA '96 Proceedings of the Fourth Annual European Symposium on Algorithms
Some Algorithmic Problems in Large Networks
ESA '01 Proceedings of the 9th Annual European Symposium on Algorithms
Connection caching: model and algorithms
Journal of Computer and System Sciences
Paging with connections: FIFO strikes again
Theoretical Computer Science
ICALP '09 Proceedings of the 36th International Colloquium on Automata, Languages and Programming: Part I
A study of integrated document and connection caching
ICALP'03 Proceedings of the 30th international conference on Automata, languages and programming
Exploiting fine grained parallelism for acceleration of web retrieval
HSI'05 Proceedings of the 3rd international conference on Human Society@Internet: web and Communication Technologies and Internet-Related Social Issues
Motivated by Web applications, we recently introduced the following theoretical model for connection caching: each host on a network can maintain (cache) a limited number of connections to other hosts. A message can be transmitted from one host to another only if the connection between the two hosts is open, i.e., it is cached by both endpoints. If a message request arrives and the respective connection is not open (a miss), the connection must be established and a certain activation cost is incurred. Establishing the new connection may force the termination (eviction) of other connections at each endpoint.

The distributed nature of connection caching makes it considerably more involved than the standard caching problem. It also makes it necessary to specify the type and amount of communication the different hosts may use to coordinate the contents of their caches. We consider three models of communication. In the first and most basic model, hosts are allowed no extra communication; in particular, a host is not notified when a connection is closed by its other endpoint. In the second model, hosts are notified of connection closures but are allowed no additional communication. In the third model, hosts may exchange information regarding shared open connections. The second model corresponds to TCP under normal network conditions; the first corresponds to operation under network failures. Although the third model is not yet implemented, we believe it is interesting to explore its potential benefits. We show that all algorithms belonging to a natural class of marking algorithms, working under the third communication model, are optimally competitive, i.e., achieve a competitive ratio of k, where k is the size of the largest cache in the network. This answers the main open problem left in [7]. Interestingly, all these optimally competitive algorithms exchange at most one extra bit per open connection.
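The model above can be made concrete with a minimal simulation. This is a sketch under assumptions not stated in the abstract: the eviction policy (LRU here) is a placeholder, and the names `Host`, `request`, and `ACTIVATION_COST` are hypothetical. Note that, as in the first communication model, evicting a connection at one endpoint leaves a stale entry at the unnotified peer.

```python
from collections import OrderedDict

ACTIVATION_COST = 1  # hypothetical unit cost of establishing a connection


class Host:
    """A host caching at most `capacity` open connections (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.conns = OrderedDict()  # peer name -> True, kept in LRU order

    def has(self, peer):
        return peer in self.conns

    def touch(self, peer):
        # Mark the connection as recently used.
        self.conns.move_to_end(peer)

    def open_to(self, peer):
        if peer not in self.conns and len(self.conns) >= self.capacity:
            # Evict the least recently used connection (placeholder policy).
            # The peer of the evicted connection is NOT notified: a stale
            # entry may remain in its cache, as in the first model.
            self.conns.popitem(last=False)
        self.conns[peer] = True
        self.conns.move_to_end(peer)


def request(hosts, u, v):
    """Serve a message request between u and v; return the cost incurred."""
    hu, hv = hosts[u], hosts[v]
    if hu.has(v) and hv.has(u):  # open at BOTH endpoints: a hit
        hu.touch(v)
        hv.touch(u)
        return 0
    # Miss: (re)establish the connection at both endpoints, evicting if full.
    hu.open_to(v)
    hv.open_to(u)
    return ACTIVATION_COST
```

For example, with caches of size 2 at every host, the request sequence (A,B), (A,B), (A,C), (A,D), (A,B) incurs costs 1, 0, 1, 1, 1: the fourth request evicts B from A's cache, so the fifth is a miss even though B still (stalely) caches A.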
We also present an optimally competitive connection-caching algorithm that works under the second model. These results show that optimal competitiveness can be achieved with very limited communication. Finally, we also consider randomized marking algorithms and show that they are O(log k)-competitive.
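For intuition about the randomized bound, the following is a sketch of the classical randomized marking algorithm for single-cache paging (Fiat et al.), which is O(log k)-competitive in that setting; the paper's distributed, connection-caching variant is not reproduced here. The function name and the seeded generator are assumptions for the sketch.

```python
import random


def randomized_marking(requests, k, seed=0):
    """Classical randomized marking for single-cache paging (sketch).

    On a hit, mark the page. On a miss: if all k cached pages are marked,
    a new phase begins and every page is unmarked; then a uniformly random
    UNMARKED page is evicted and the requested page is brought in and
    marked. Returns the number of misses."""
    rng = random.Random(seed)
    cache, marked = set(), set()
    misses = 0
    for p in requests:
        if p not in cache:
            misses += 1
            if len(cache) == k:
                unmarked = cache - marked
                if not unmarked:  # every page is marked: start a new phase
                    marked.clear()
                    unmarked = set(cache)
                cache.remove(rng.choice(sorted(unmarked)))
            cache.add(p)
        marked.add(p)  # the requested page is always marked
    return misses
```

With k = 2, the sequence 1, 2, 3 forces three misses (the third request ends the first phase), while 1, 2, 1, 2 incurs only the two compulsory misses.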