Implementing cooperative prefetching and caching in a globally-managed memory system

  • Authors and affiliations:
  • Geoffrey M. Voelker, Department of Computer Science and Engineering, University of Washington
  • Eric J. Anderson, Department of Computer Science and Engineering, University of Washington
  • Tracy Kimbrel, Department of Computer Science and Engineering, University of Washington, and IBM T.J. Watson Research Center
  • Michael J. Feeley, Department of Computer Science and Engineering, University of Washington, and Department of Computer Science, University of British Columbia
  • Jeffrey S. Chase, Department of Computer Science and Engineering, University of Washington, and Department of Computer Science, Duke University
  • Anna R. Karlin, Department of Computer Science and Engineering, University of Washington
  • Henry M. Levy, Department of Computer Science and Engineering, University of Washington

  • Venue:
  • SIGMETRICS '98/PERFORMANCE '98: Proceedings of the 1998 ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems
  • Year:
  • 1998

Abstract

This paper presents cooperative prefetching and caching --- the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques from three lines of research: prefetching algorithms, cluster-wide memory management, and parallel I/O. When used together, these techniques greatly increase the power of prefetching relative to a conventional (non-global-memory) system. We have designed and implemented PGMS, a cooperative prefetching and caching system, under the Digital Unix operating system running on a 1.28 Gb/sec Myrinet-connected cluster of DEC Alpha workstations. Our measurements and analysis show that by using available global resources, cooperative prefetching can obtain significant speedups for I/O-bound programs. For example, for a graphics rendering application, our system achieves a speedup of 4.9 over a non-prefetching version of the same program, and a 3.1-fold improvement over that program using local-disk prefetching alone.
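To make the idea concrete, the sketch below (a hypothetical illustration, not PGMS's actual implementation) shows the tiered lookup that cooperative prefetching relies on: a hinted block is brought local from the cheapest tier available, i.e. it is already in the local memory cache, it is cached in an idle peer's memory and fetched over the network, or it must be read from disk. All names and latency constants here are illustrative assumptions.

```c
/*
 * Hypothetical sketch of hint-driven cooperative prefetching.
 * A hinted block is served from the cheapest tier available:
 * local memory cache, a peer's idle memory (global cache), or disk.
 * Latency constants and names are illustrative, not PGMS values.
 */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 16

/* Illustrative per-tier access costs in microseconds. */
enum { LOCAL_US = 1, NETWORK_US = 300, DISK_US = 10000 };

typedef struct {
    bool in_local;   /* block already cached in local memory    */
    bool in_global;  /* block cached in some idle peer's memory */
} block_state_t;

/* Prefetch one hinted block; return the cost of making it local. */
static int prefetch_block(block_state_t *b)
{
    if (b->in_local)
        return LOCAL_US;                /* already resident: no work */
    if (b->in_global) {
        b->in_local = true;             /* fetch from peer memory    */
        return NETWORK_US;
    }
    b->in_local = true;                 /* fall back to local disk   */
    return DISK_US;
}

int main(void)
{
    /* Hypothetical hint list: blocks the application says it will read next. */
    block_state_t hints[NBLOCKS] = {0};
    for (int i = 0; i < NBLOCKS; i++) {
        hints[i].in_local  = (i % 4 == 0);               /* some blocks local    */
        hints[i].in_global = (i % 4 == 1 || i % 4 == 2); /* some on idle peers   */
    }

    long total_us = 0;
    for (int i = 0; i < NBLOCKS; i++)
        total_us += prefetch_block(&hints[i]);

    printf("prefetched %d hinted blocks, total cost %ld us\n",
           NBLOCKS, total_us);
    return 0;
}
```

The point of the tiering is the one the abstract makes: when hinted data already sits in another node's idle memory, a network fetch at a few hundred microseconds replaces a multi-millisecond disk read, which is where the reported speedups over local-disk prefetching come from.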