A Framework for Cache Management for Mobile Databases: Design and Evaluation

  • Authors:
  • Boris Y. Chan; Antonio Si; Hong V. Leong

  • Affiliations:
  • Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (all authors); Antonio Si: antonio.si@oracle.com

  • Venue:
  • Distributed and Parallel Databases
  • Year:
  • 2001

Abstract

In a mobile computing environment, database servers disseminate information to multiple mobile clients via wireless channels. Due to the low bandwidth and low reliability of wireless channels, it is important for a mobile client to cache frequently accessed database items in its local storage. This improves the performance of database queries and the availability of database items for query processing during disconnection. In this paper, we investigate issues of caching granularity, coherence strategy, and replacement policy for caching mechanisms in a mobile environment utilizing a point-to-point communication paradigm. We first illustrate that page-based caching is not suitable in the mobile context due to the lack of locality among database items. We propose three levels of caching granularity: attribute caching, object caching, and hybrid caching, which combines attribute and object caching. Next, we show that existing coherence strategies are inappropriate due to frequent disconnections in a mobile environment, and we propose a cache coherence strategy based on the update patterns of database items. Via a detailed simulation model, we examine the performance of the various levels of caching granularity under our cache coherence strategy. We observe that, in general, hybrid caching achieves better performance. Finally, we propose several cache replacement policies that adapt to the access patterns of database items. For each caching granularity, we find that our replacement policies outperform conventional ones in most situations.
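
To make the granularity and replacement ideas concrete, the sketch below is a hypothetical illustration only, not the paper's actual algorithms: a client-side cache that stores either whole objects or individual attributes (the hybrid granularity), with a simple frequency-based eviction standing in for the paper's adaptive replacement policies. All names (HybridCache, CacheEntry) and the eviction rule are assumptions made for illustration.

```python
# A minimal sketch (assumed, not from the paper) of a hybrid
# attribute/object cache for a mobile client.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    key: tuple      # (object_id,) for an object, (object_id, attr) for an attribute
    value: object   # the cached object or attribute value
    hits: int = 0   # access count, consulted by the replacement policy

class HybridCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[tuple, CacheEntry] = {}

    def get(self, object_id, attr=None):
        """Look up an attribute entry first, then fall back to a cached whole object."""
        for key in ((object_id, attr), (object_id,)):
            entry = self.entries.get(key)
            if entry is not None:
                entry.hits += 1
                return entry.value
        return None  # cache miss: caller must fetch over the wireless channel

    def put(self, object_id, value, attr=None):
        """Insert an object (attr=None) or a single attribute."""
        key = (object_id,) if attr is None else (object_id, attr)
        if key not in self.entries and len(self.entries) >= self.capacity:
            # Replacement-policy stand-in: evict the least-frequently-
            # accessed entry (the paper's policies adapt to access patterns).
            victim = min(self.entries.values(), key=lambda e: e.hits)
            del self.entries[victim.key]
        self.entries[key] = CacheEntry(key, value)

# Usage: mix object-level and attribute-level entries in one cache.
cache = HybridCache(capacity=2)
cache.put("emp42", {"name": "Lee", "dept": "CS"})  # object granularity
cache.put("emp7", "Sales", attr="dept")            # attribute granularity
assert cache.get("emp7", attr="dept") == "Sales"
```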