Connection caching under various models of communication

  • Authors:
  • Edith Cohen; Haim Kaplan; Uri Zwick

  • Affiliations:
  • AT&T Labs-Research, 180 Park Avenue, Florham Park, NJ; Computer Science Department, Tel-Aviv University, Tel-Aviv 69978, Israel; Computer Science Department, Tel-Aviv University, Tel-Aviv 69978, Israel

  • Venue:
  • Proceedings of the Twelfth Annual ACM Symposium on Parallel Algorithms and Architectures
  • Year:
  • 2000

Abstract

Motivated by Web applications, we recently introduced the following theoretical model for connection caching: Each host on a network can maintain (cache) a limited number of connections to other hosts. A message can be transmitted from one host to another only if the connection between these two hosts is open, i.e., it is cached by both endpoints. If a message request arrives and the respective connection is not open (a miss), the connection needs to be established and a certain activation cost is incurred. The establishment of the new connection may force the termination (eviction) of other connections at each endpoint.

The distributed nature of connection caching makes it considerably more involved than the standard caching problem. It also makes it necessary to specify the type and amount of communication the different hosts are allowed to use to coordinate the contents of their caches. We consider three different models of communication. In the first and most basic model, hosts are allowed no extra communication; in particular, they are not notified when a connection is closed by its other endpoint. In the second model, hosts are notified of connection closures but are allowed no additional communication. In the third model, hosts are allowed to exchange information regarding shared open connections. The second model corresponds to TCP under normal network conditions, while the first corresponds to operation under network failures. Although the third model is not yet implemented, we believe it is interesting to explore its potential benefits. We show that all algorithms belonging to a natural class of marking algorithms, working under the third communication model, are optimally competitive, i.e., achieve a competitive ratio of k, where k is the size of the largest cache in the network. This answers the main open problem left in [7]. Interestingly, all these optimally competitive algorithms exchange at most one extra bit per open connection. We also present an optimally competitive connection-caching algorithm that works under the second model. These results show that optimal competitiveness can be achieved with very limited communication.

Finally, we also consider randomized marking algorithms and show that they are O(log k)-competitive.
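
To make the marking-based caching idea concrete, the sketch below shows one host's view of a generic marking-style eviction policy applied to a connection cache. It is a minimal illustration under assumed names and structure (the class `ConnectionCache`, the capacity parameter `k`, and the unit activation cost are hypothetical), not the paper's algorithm: coordination with the other endpoint, including the single extra bit exchanged per open connection in the third model, is omitted.

```python
# A minimal sketch (assumed names and structure, not the paper's algorithm)
# of a marking-style eviction policy from one host's point of view.
# Capacity k bounds the number of simultaneously open connections; every
# miss pays a unit activation cost. Coordination with the other endpoint
# is deliberately left out.

class ConnectionCache:
    def __init__(self, k: int):
        self.k = k              # cache capacity: max number of open connections
        self.open = set()       # peers with a currently open (cached) connection
        self.marked = set()     # connections marked during the current phase
        self.misses = 0         # total activation cost incurred so far

    def request(self, peer: str) -> None:
        if peer in self.open:   # hit: connection already cached, just mark it
            self.marked.add(peer)
            return
        self.misses += 1        # miss: pay the activation cost
        if len(self.open) == self.k:
            unmarked = self.open - self.marked
            if not unmarked:    # every open connection is marked: start a new phase
                self.marked.clear()
                unmarked = set(self.open)
            victim = min(unmarked)      # evict some unmarked connection
            self.open.remove(victim)    # (arbitrary deterministic choice here)
        self.open.add(peer)
        self.marked.add(peer)


# Tiny usage example: a host with room for two connections serving five requests.
cache = ConnectionCache(k=2)
for p in ["a", "b", "a", "c", "b"]:
    cache.request(p)
print(cache.misses)  # activation cost paid on this request sequence
```

Replacing the deterministic eviction choice with a uniformly random unmarked connection gives the flavor of the randomized marking algorithms mentioned in the abstract; that correspondence is an assumption for illustration, not a statement from the paper.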