Supporting reference and dirty bits in SPUR's virtual address cache

  • Authors:
  • D. A. Wood; R. H. Katz

  • Affiliations:
  • Computer Science Division, Electrical Engineering and Computer Science Department, University of California, Berkeley, Berkeley, CA (both authors)

  • Venue:
  • ISCA '89 Proceedings of the 16th annual international symposium on Computer architecture
  • Year:
  • 1989

Abstract

Virtual address caches can provide faster access times than physical address caches, because translation is required only on cache misses. However, because the translation information is not checked on each cache access, maintaining reference and dirty bits is more difficult. In this paper we examine the trade-offs in supporting reference and dirty bits in a virtual address cache. We use measurements from a uniprocessor SPUR prototype to evaluate different alternatives. The prototype's built-in performance counters make it easy to determine the frequency of important events and to calculate performance metrics.

Our results indicate that dirty bits can be efficiently emulated with protection, and thus require no special hardware support. Although this can lead to excess faults when previously cached blocks are written, these account for only 19% of the total faults, on average. For reference bits, a miss bit approximation, which checks the reference bits only on cache misses, leads to more page faults at smaller memory sizes. However, the additional overhead required to maintain true reference bits far exceeds the benefits of a lower fault rate.
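
As a rough illustration of the dirty-bit emulation the abstract describes, the sketch below shows how an operating system can map a logically writable page read-only and record a software dirty bit in the write-protection fault handler, so that only the first write to a clean page traps. The page-table entry layout (struct pte), its field names, and the handler hook are assumptions made for this example, not the SPUR prototype's actual data structures or interfaces.

    /*
     * Minimal sketch of dirty-bit emulation with page protection.
     * struct pte and on_write_fault() are hypothetical names used only
     * for illustration; they do not reflect the SPUR implementation.
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct pte {
        uintptr_t frame;       /* physical frame number                    */
        bool      writable;    /* page is logically writable               */
        bool      hw_write_ok; /* write permission exposed to the hardware */
        bool      dirty;       /* software-maintained dirty bit            */
    };

    /* Map a logically writable page as read-only so its first store faults. */
    void map_clean(struct pte *pte)
    {
        pte->dirty = false;
        pte->hw_write_ok = false;   /* first write will trap to the kernel */
    }

    /*
     * Write-protection fault handler: if the page is logically writable,
     * the fault only signals "first write", so record the dirty bit,
     * enable writes, and retry; otherwise it is a genuine violation.
     */
    bool on_write_fault(struct pte *pte)
    {
        if (!pte->writable)
            return false;           /* real protection fault: deliver it   */
        pte->dirty = true;          /* emulated dirty bit                   */
        pte->hw_write_ok = true;    /* allow the write on retry             */
        return true;                /* retry the faulting store             */
    }

The excess faults the abstract quantifies (about 19% of total faults, on average) correspond in this sketch to traps taken on pages whose blocks were already cached but still mapped clean.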