Strongly consistent callback cache mechanisms have been studied for data access in wireless networks. In such mechanisms, update information is critical: when a data object is updated at a remote server, the corresponding cached copies in mobile terminals (MTs) become invalid, and cache hits on those copies are useless. In this paper, we propose an adaptive access mechanism called optimal callback with two-level adaptation. In the first-level adaptation, the cache size of an MT is adjusted adaptively based on the update-to-access ratio (UAR), defined as the average number of updates per data object access. The cache size ranges over [0, M], where M is the maximum physical cache size of the MT. Two extreme cases illustrate the idea: 1) when the UAR is so large that cached objects are always obsolete, the cache should not be used and its size should therefore be set to zero; 2) when the UAR is zero, so that every cached object is valid, the cache size should be set to M. In all other situations, the cache size varies dynamically between 0 and M. For any object, we define a particularly important threshold, the U-threshold, as the UAR value beyond which the object should not be cached at all. The second-level adaptation rests on the observation that if an object is small, sending back the updated object itself may be cheaper than sending back an invalidation message. Therefore, when an object is updated at the server, it is pushed directly to the MTs if its size is below a threshold, called the Push Threshold (T); otherwise, an invalidation message is sent to the MTs. We analytically model the cost of the proposed adaptive scheme as the total traffic between the server and an MT per data object access, and the optimal cache size and the optimal value of T are obtained simultaneously by minimizing this cost function.
Furthermore, the U-threshold is derived analytically. Both simulation and analytical results are used to study the performance of the proposed scheme and compare it with several alternatives under a wide range of scenarios.
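The two adaptation levels above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: the linear cache-size policy, and all names (`adapted_cache_size`, `on_server_update`, `u_threshold`, `push_threshold`) are assumptions for illustration; the paper derives the optimal cache size and T analytically by minimizing the traffic cost function.

```python
def adapted_cache_size(uar: float, u_threshold: float, max_size: int) -> int:
    """First-level adaptation: shrink the cache as the update-to-access
    ratio (UAR) grows.

    uar == 0            -> every cached object stays valid: use the full
                           physical cache M
    uar >= u_threshold  -> cached objects are effectively always obsolete:
                           disable the cache (size 0)
    otherwise           -> interpolate between the extremes (a simple linear
                           policy stands in here for the paper's analytically
                           optimal size)
    """
    if uar <= 0:
        return max_size
    if uar >= u_threshold:
        return 0
    return int(max_size * (1 - uar / u_threshold))


def on_server_update(obj_size: int, push_threshold: int) -> str:
    """Second-level adaptation: when an object is updated at the server,
    push the new object itself if it is smaller than the Push Threshold T;
    otherwise send an invalidation message to the MTs."""
    return "push_object" if obj_size < push_threshold else "invalidate"
```

For example, with `u_threshold = 2.0` and `max_size = 100`, a UAR of 1.0 yields a cache size of 50 under the linear stand-in policy, while any UAR at or above 2.0 disables the cache entirely.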