Cooperative cache partitioning for chip multiprocessors. Proceedings of the 21st Annual International Conference on Supercomputing.
FlexDCP: a QoS framework for CMP architectures. ACM SIGOPS Operating Systems Review.
MLP-aware dynamic cache partitioning. Proceedings of the 3rd International Conference on High Performance Embedded Architectures and Compilers (HiPEAC'08).
Load balancing using dynamic cache allocation. Proceedings of the 7th ACM International Conference on Computing Frontiers.
Dynamic cache partitioning based on the MLP of cache misses. Transactions on High-Performance Embedded Architectures and Compilers III.
Throttling capacity sharing in private L2 caches of CMPs. Proceedings of the 2011 ACM Symposium on Research in Applied Computation.
Scalable shared-cache management by containing thrashing workloads. Proceedings of the 5th International Conference on High Performance Embedded Architectures and Compilers (HiPEAC'10).
Cache partitioning has been proposed as a promising alternative to the traditional eviction policies of shared cache levels in modern CMP architectures: throughput improves at a reasonable hardware cost. However, these policies behave differently depending on which applications are running on the architecture. In this paper, we introduce metrics that characterize applications and allow us to give a clear, simple model explaining the resulting throughput speedups.
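As a rough illustration of the kind of mechanism the abstract refers to, the sketch below implements one well-known cache-partitioning scheme: greedy, utility-based allocation of cache ways, where each way is granted to the application whose miss count drops the most from one extra way. This is a generic sketch, not the paper's own method, and the miss curves are invented numbers, not data from any of the works listed above.

```python
# Hypothetical sketch of way-based shared-cache partitioning.
# A greedy allocator hands out cache ways one at a time, each time to
# the application with the highest marginal utility (miss reduction
# from one additional way). Miss curves are made-up example data.

def partition_ways(miss_curves, total_ways):
    """miss_curves[i][w] = misses of application i when given w ways
    (w = 0 .. total_ways). Returns the ways allocated to each app."""
    n_apps = len(miss_curves)
    alloc = [0] * n_apps
    for _ in range(total_ways):
        # Marginal utility of granting one more way to each application.
        gains = [miss_curves[i][alloc[i]] - miss_curves[i][alloc[i] + 1]
                 for i in range(n_apps)]
        winner = max(range(n_apps), key=lambda i: gains[i])
        alloc[winner] += 1
    return alloc

# Example: app 0 is cache-friendly (misses drop steeply with capacity),
# app 1 is a streaming/thrashing workload that barely benefits.
curves = [
    [100, 40, 20, 12, 10, 9, 8, 8, 8],    # app 0: cache-friendly
    [50, 49, 48, 47, 46, 45, 44, 43, 42]  # app 1: streaming
]
print(partition_ways(curves, 8))  # → [6, 2]
```

The greedy allocator starves the streaming workload, which cannot make use of extra capacity anyway; this is exactly the kind of application-dependent behavior that motivates characterizing workloads before partitioning.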