Informed multi-process prefetching and caching

  • Authors:
  • Andrew Tomkins; R. Hugo Patterson; Garth Gibson

  • Affiliation:
  • Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA

  • Venue:
  • SIGMETRICS '97 Proceedings of the 1997 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
  • Year:
  • 1997


Abstract

Informed prefetching and caching based on application disclosure of future I/O accesses (hints) can dramatically reduce the execution time of I/O-intensive applications. A recent study showed that, in the context of a single hinting application, prefetching and caching algorithms should adapt to the dynamic load on the disks to obtain the best performance. In this paper, we show how to incorporate adaptivity to disk load into the TIP2 system, which uses cost-benefit analysis to allocate global resources among multiple processes. We compare the resulting system, which we call TIPTOE (TIP with Temporal Overload Estimators), to Cao et al.'s LRU-SP allocation scheme, also modified to include adaptive prefetching. Using disk-accurate trace-driven simulation, we show that, averaged over eleven experiments involving pairs of hinting applications, with data striped over one to ten disks, TIPTOE delivers 7% lower execution time than LRU-SP. When the computation and I/O demands of each experiment are closely matched, on a two-disk array, TIPTOE delivers 18% lower execution time.
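To make the cost-benefit idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of allocating a global pool of cache buffers among competing consumers, each of which supplies an estimator of the marginal value of its next buffer. The consumer names and value functions here are hypothetical; in TIP2/TIPTOE the estimates come from hinted prefetch benefit, cache-hit benefit, and disk-load costs rather than fixed formulas.

```python
def allocate_buffers(total, estimators):
    """Greedily hand out `total` cache buffers one at a time, always to the
    consumer whose estimator reports the highest marginal value for its next
    buffer.

    estimators maps a consumer name to a function f(n) returning the value of
    that consumer's (n+1)-th buffer; declining values model diminishing
    returns. Allocation stops when all buffers are placed, so each consumer
    ends up holding buffers down to a common marginal-value threshold.
    """
    counts = {name: 0 for name in estimators}
    for _ in range(total):
        # Pick the consumer that values one more buffer the most.
        best = max(estimators, key=lambda name: estimators[name](counts[name]))
        counts[best] += 1
    return counts


# Hypothetical example: a hinted prefetcher values buffers more steeply than
# an unhinted LRU cache, so it wins more of a 5-buffer pool.
counts = allocate_buffers(5, {
    "prefetch_hints": lambda n: 10.0 / (n + 1),
    "lru_cache": lambda n: 6.0 / (n + 1),
})
```

A load-adaptive variant in the spirit of TIPTOE would make the prefetcher's estimator a function of current disk queue depth as well, shrinking prefetch benefit when the disks are already overloaded.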