ATUM: a new technique for capturing address traces using microcode. In ISCA '86: Proceedings of the 13th Annual International Symposium on Computer Architecture.
The Rice parallel processing testbed. In SIGMETRICS '88: Proceedings of the 1988 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems.
Evaluating associativity in CPU caches. IEEE Transactions on Computers.
Efficient simulation of cache memories. In WSC '89: Proceedings of the 21st Conference on Winter Simulation.
Page placement algorithms for large real-indexed caches. ACM Transactions on Computer Systems (TOCS).
The effect of page allocation on caches. In MICRO 25: Proceedings of the 25th Annual International Symposium on Microarchitecture.
The TLB slice: a low-cost, high-speed address translation mechanism. In ISCA '90: Proceedings of the 17th Annual International Symposium on Computer Architecture.
Cache conflict resolution through detection, analysis and dynamic remapping of active pages. In ACM-SE 38: Proceedings of the 38th Annual Southeast Regional Conference.
Historically, most virtual-storage operating systems have used random virtual page placement. Random placement interacts poorly with a direct-mapped cache, producing cache conflict misses, some of which could be avoided through better placement decisions. Recently, several studies of careful page placement as an alternative to random placement have shown that a direct-mapped cache managed with careful placement performs nearly as well as a two-way set-associative cache under random placement. Toward a performance evaluation methodology for careful page placement, we propose two new classes of theoretical page placement policies for direct-mapped caches based on memory reference string lookahead. Lookahead page placement is a systems modeling tool for evaluating the performance of nonlookahead policies and for gauging the gains that improved nonlookahead policies might achieve. Strict lookahead policies perform virtual page mappings using only memory reference lookahead information. Hybrid lookahead policies combine existing careful page placement methods with future knowledge obtained through lookahead. Our lookahead policies use greedy, polynomial-time bin selection procedures to assign virtual pages to cache bins with favorable future usage characteristics. Trace-driven simulation is used to compare three lookahead policies against several nonlookahead page placement policies on three multiprogrammed UNIX workloads.
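The greedy, lookahead-driven bin selection described above can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the function name `place_pages`, the fixed lookahead window, and the conflict-count cost function are all assumptions. On a page's first reference, the sketch scans the upcoming references and maps the page to the cache bin whose already-placed pages will be touched least, i.e. the bin with the fewest potential direct-mapped conflicts.

```python
def place_pages(refs, num_bins, window):
    """Greedy strict-lookahead placement (illustrative sketch).

    refs     -- memory reference string: a list of virtual page numbers
    num_bins -- number of cache bins in the direct-mapped cache
    window   -- how many future references the lookahead may inspect
    Returns a dict mapping each virtual page to its assigned cache bin.
    """
    placement = {}  # virtual page -> cache bin
    for i, page in enumerate(refs):
        if page in placement:
            continue  # place each page only on its first reference
        future = refs[i + 1 : i + 1 + window]
        # Cost of a bin = upcoming references to pages already mapped
        # to that bin; each such reference is a potential conflict miss.
        cost = [0] * num_bins
        for p in future:
            if p in placement:
                cost[placement[p]] += 1
        # Greedy choice: bin with the fewest predicted conflicts.
        placement[page] = min(range(num_bins), key=lambda b: cost[b])
    return placement
```

For example, with `refs = [0, 1, 0, 1, 2, 0]` and two bins, page 1 avoids page 0's bin because the lookahead sees page 0 recurring; the scan over the window makes each placement decision polynomial in the window size and bin count, matching the greedy, polynomial-time flavor of the policies in the abstract.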