As chip multiprocessor systems incorporate an increasing number of cores, memory access latency, driven by on-chip communication and remote cache accesses, is becoming a critical bottleneck. To combat this problem, advanced cache organizations have been proposed as alternatives to traditional private and static non-uniform cache access (e.g., distributed shared) architectures. In this paper, we demonstrate that fairly simple compiler analysis can classify memory accesses into private data accesses and shared data accesses. In addition, we introduce a third category, probably private access, and quantify its impact relative to the traditional private and shared categories. The memory access classification produced by the compiler analysis is conveyed to the runtime system through the page table to enable a hybrid private-shared caching technique: the proposed cache mechanism distinguishes data access patterns and applies different placement and search policies to each category to improve performance. Our analysis shows that many applications contain significant amounts of both private and shared data, and that the compiler analysis identifies private data effectively for many applications.
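To illustrate the three-way classification, the following is a minimal sketch, not the paper's actual compiler analysis. It labels each page from a trace of observed accesses: pages touched by multiple threads are shared, pages touched by one thread are private, and single-thread pages the analysis cannot prove private (modeled here by a hypothetical `escaping_pages` set, standing in for data whose address may escape its thread) are probably private. The function name and parameters are illustrative assumptions.

```python
from collections import defaultdict

def classify_pages(accesses, escaping_pages):
    """Label pages as 'private', 'shared', or 'probably_private'.

    accesses: iterable of (thread_id, page) pairs observed in a trace.
    escaping_pages: pages the analysis cannot prove stay thread-local
                    (hypothetical stand-in for escape information).
    """
    accessors = defaultdict(set)
    for tid, page in accesses:
        accessors[page].add(tid)

    labels = {}
    for page, tids in accessors.items():
        if len(tids) > 1:
            # Multiple threads touch the page: clearly shared.
            labels[page] = "shared"
        elif page in escaping_pages:
            # One observed accessor, but privacy cannot be proven.
            labels[page] = "probably_private"
        else:
            # Exactly one accessor and no escape: provably private.
            labels[page] = "private"
    return labels
```

In the actual scheme, labels like these would be stored in page-table entries so the runtime can steer private pages to local caches while using a shared placement and search policy for the rest.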