Page placement algorithms for large real-indexed caches
ACM Transactions on Computer Systems (TOCS)
Application-controlled physical memory using external page-cache management
ASPLOS V Proceedings of the fifth international conference on Architectural support for programming languages and operating systems
Improving IPC by kernel design
SOSP '93 Proceedings of the fourteenth ACM symposium on Operating systems principles
Avoiding conflict misses dynamically in large direct-mapped caches
ASPLOS VI Proceedings of the sixth international conference on Architectural support for programming languages and operating systems
Instruction fetching: coping with code bloat
ISCA '95 Proceedings of the 22nd annual international symposium on Computer architecture
IEEE Transactions on Computers
Improving performance by cache driven memory management
HPCA '95 Proceedings of the 1st IEEE Symposium on High-Performance Computer Architecture
U-cache: a cost-effective solution to synonym problem
HPCA '95 Proceedings of the 1st IEEE Symposium on High-Performance Computer Architecture
W-Order scan: minimizing cache pollution by application software level cache management for MMDB
WAIM'11 Proceedings of the 12th international conference on Web-age information management
A simple modification to an operating system's page allocation algorithm can give physically addressed caches the speed of virtually addressed caches. Colored page allocation reduces the number of address bits that must be translated before cache access, allowing large low-associativity caches to be indexed before address translation completes, which reduces latency to the processor. Colored allocation has further benefits: cache miss rates are generally lower and more uniform, and the inclusion property holds for second-level caches of lower associativity. However, colored allocation requires partitioning main memory by color and requires shared virtual addresses to agree in more low-order bits. Simulation results show highly non-uniform cache miss rates under normal (uncolored) allocation. Analysis quantifies the extent of second-level cache inclusion and the reduction in effective main-memory capacity due to partitioning.
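The allocation policy the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (the class and function names are not from the paper): the kernel partitions free frames into per-color free lists, where a page's color is the low bits of its page number that also index the cache, and it tries to hand each virtual page a physical frame of the same color so that virtual and physical addresses index the cache identically.

```python
# Hedged sketch of colored page allocation. Constants are illustrative
# assumptions: 4 KiB pages and a 1 MiB direct-mapped cache give 256 colors.
PAGE_SIZE = 4096
CACHE_SIZE = 1 << 20
NUM_COLORS = CACHE_SIZE // PAGE_SIZE  # 256

class PageColorAllocator:
    """Hypothetical allocator keeping one free list per page color."""

    def __init__(self, num_frames):
        # Partition the free physical frames by color.
        self.free = [[] for _ in range(NUM_COLORS)]
        for frame in range(num_frames):
            self.free[frame % NUM_COLORS].append(frame)

    @staticmethod
    def color_of(page_number):
        # A page's color is its page number modulo the number of colors,
        # i.e. the bits that select the cache index beyond the page offset.
        return page_number % NUM_COLORS

    def alloc(self, virtual_page_number):
        want = self.color_of(virtual_page_number)
        # Prefer a frame whose color matches the virtual page; if that
        # color's list is empty, fall back to the next non-empty color
        # (trading cache placement for availability).
        for offset in range(NUM_COLORS):
            color = (want + offset) % NUM_COLORS
            if self.free[color]:
                return self.free[color].pop()
        raise MemoryError("out of physical frames")
```

When a matching-color frame is found, the virtual and physical addresses share the cache-index bits above the page offset, so the cache can be indexed with the untranslated virtual address; the fallback path is one possible policy when memory of a given color is exhausted, which is exactly the partitioning cost the abstract notes.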