Implementation and performance of integrated application-controlled file caching, prefetching, and disk scheduling

  • Authors:
  • Pei Cao; Edward W. Felten; Anna R. Karlin; Kai Li

  • Affiliations:
  • Princeton Univ., Princeton, NJ; Princeton Univ., Princeton, NJ; Univ. of Washington, Seattle, WA; Princeton Univ., Princeton, NJ

  • Venue:
  • ACM Transactions on Computer Systems (TOCS)
  • Year:
  • 1996

Abstract

As the performance gap between disks and microprocessors continues to increase, effective utilization of the file cache becomes increasingly important. Application-controlled file caching and prefetching can apply application-specific knowledge to improve file cache management. However, supporting application-controlled file caching and prefetching is nontrivial because caching and prefetching need to be integrated carefully, and the kernel needs to allocate cache blocks among processes appropriately. This article presents the design, implementation, and performance of a file system that integrates application-controlled caching, prefetching, and disk scheduling. We use a two-level cache management strategy. The kernel uses the LRU-SP (Least-Recently-Used with Swapping and Placeholders) policy to allocate blocks to processes, and each process integrates application-specific caching and prefetching based on the controlled-aggressive policy, an algorithm previously shown to be nearly optimal in a theoretical setting. Each process also improves its disk access latency by submitting its prefetches in batches so that the requests can be scheduled to optimize disk access performance. Our measurements show that this combination of techniques greatly improves the performance of the file system. We measured that the running time is reduced by 3% to 49% (average 26%) for single-process workloads and by 5% to 76% (average 32%) for multiprocess workloads.
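The abstract does not reproduce the paper's kernel-level implementation, so the following C fragment is only a rough user-level sketch of the batched-prefetching idea: hypothetical prefetch requests are collected and sorted by block number before being issued, standing in for the disk scheduler reordering a batch to reduce seek time. The names (`prefetch_req`, `submit_prefetch_batch`) and the block-number proxy for disk position are illustrative assumptions, not the paper's API.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical prefetch request: a file block the application expects
 * to read soon, identified here by a simplified disk block number. */
typedef struct {
    int  fd;      /* file descriptor the block belongs to              */
    long block;   /* block index, used as a proxy for disk position    */
} prefetch_req;

/* Order requests by block number so a batch can be serviced in one
 * sweep across the disk (an elevator-style schedule). */
static int by_block(const void *a, const void *b)
{
    const prefetch_req *x = a, *y = b;
    return (x->block > y->block) - (x->block < y->block);
}

/* Submit a whole batch of prefetches at once.  Sorting the batch before
 * issuing it mimics the disk scheduler reordering requests to cut seek
 * overhead; a real kernel would queue the reads asynchronously. */
static void submit_prefetch_batch(prefetch_req *reqs, size_t n)
{
    qsort(reqs, n, sizeof reqs[0], by_block);
    for (size_t i = 0; i < n; i++)
        printf("prefetch fd=%d block=%ld\n", reqs[i].fd, reqs[i].block);
}

int main(void)
{
    /* A small batch of out-of-order requests from two open files. */
    prefetch_req batch[] = {
        { 3, 912 }, { 3, 14 }, { 4, 507 }, { 3, 15 }, { 4, 2 }
    };
    submit_prefetch_batch(batch, sizeof batch / sizeof batch[0]);
    return 0;
}
```

Issuing the five requests as one sorted batch rather than one at a time is what lets the (hypothetical) scheduler service them in a single pass; the paper reports that this batching, combined with LRU-SP allocation and controlled-aggressive per-process management, yields the running-time reductions quoted above.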