Virtual address caches can provide faster access times than physical address caches because address translation is required only on cache misses. However, because translation information is not checked on each cache access, maintaining reference and dirty bits is more difficult. In this paper we examine the trade-offs in supporting reference and dirty bits in a virtual address cache, using measurements from a uniprocessor SPUR prototype to evaluate the alternatives. The prototype's built-in performance counters make it easy to determine the frequency of important events and to calculate performance metrics. Our results indicate that dirty bits can be emulated efficiently with page protection, and thus require no special hardware support. Although this approach causes extra faults when previously cached blocks are written, these account for only 19% of the total faults, on average. For reference bits, a miss-bit approximation, which updates reference bits only on cache misses, leads to more page faults at smaller memory sizes. However, the additional overhead required to maintain true reference bits far exceeds the benefit of the lower fault rate.