The emergence of hardware accelerators, such as graphics processing units (GPUs), has changed the way processing elements (PEs) interact with main memory. In architectures like the Cell/B.E. or GPUs, the PEs incorporate local memories that are filled with data transferred from main memory through direct memory access (DMA). We expect chip multiprocessors (CMPs) with DMA-managed local memories to become more common in the near future, given the growing interest in accelerators, and we show that in that case the way cache hierarchies are conceived should be revisited. For last-level caches in particular, the norm today is to use latency-aware organizations; in dynamic non-uniform cache architectures (D-NUCA), for instance, data is migrated closer to the requesting processor to reduce access latency. In a DMA-based scenario, however, the latency of the memory system becomes negligible compared with the time needed to move the DMA data, so latency-aware designs are, a priori, inefficient.

In this work, we revisit last-level cache design for DMA-based CMP architectures with master-worker execution, evaluating two scenarios. In the first, a set of private caches replicates data across them, and a hardware protocol keeps the copies coherent; each PE then has a nearby copy of the datum, which improves cache access latency. In the second, a partitioned cache allocates each datum to a cache location determined by its physical address; data is never replicated, and the latency of accessing a datum varies with its location. In contrast with traditional load/store-based architectures, we find that for DMA-based scenarios the partitioned last-level cache outperforms the cache with data replication.
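To see why per-access latency loses relevance in a DMA-based scenario, consider a rough back-of-envelope model. The C sketch below compares the cycles consumed by one DMA buffer transfer with a single last-level cache hit; the buffer size, bandwidth, hit latency, and clock frequency are assumed values chosen for illustration, not figures from the paper.

```c
#include <stdio.h>

/* Illustrative model (all numbers are assumptions, not measurements):
 * the time to move a DMA buffer dwarfs the per-access latency that a
 * latency-aware (e.g., D-NUCA) last-level cache could save. */
int main(void)
{
    const double bytes      = 16 * 1024;  /* assumed 16 KiB DMA buffer   */
    const double gb_per_s   = 25.6;       /* assumed transfer bandwidth  */
    const double hit_cycles = 20.0;       /* assumed LLC hit latency     */
    const double ghz        = 3.2;        /* assumed clock frequency     */

    /* Cycles spent transferring the buffer at the assumed bandwidth. */
    double transfer_cycles = bytes / (gb_per_s * 1e9) * (ghz * 1e9);

    printf("DMA transfer: %.0f cycles vs. one LLC hit: %.0f cycles\n",
           transfer_cycles, hit_cycles);
    /* Roughly 2000 cycles vs. 20: shaving a few cycles of hit latency
     * by migrating data closer to the requester barely matters here. */
    return 0;
}
```

Under these assumed numbers the transfer is two orders of magnitude more expensive than a hit, which is the intuition behind questioning latency-aware organizations in this setting.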
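The address-based allocation of the partitioned scheme can be made concrete with a minimal sketch. The C function below selects a cache bank from a physical address by simple bit interleaving; the line size, bank count, and bit positions are illustrative assumptions, not the configuration evaluated in the work.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of address-interleaved bank selection in a partitioned
 * last-level cache. LINE_BITS, NUM_BANKS, and the bit positions are
 * assumptions chosen for illustration. */

#define LINE_BITS 6u            /* assumed 64-byte cache lines       */
#define NUM_BANKS 16u           /* assumed 16 banks, a power of two  */

/* Drop the line offset, then use the next log2(NUM_BANKS) address bits
 * as the bank index: consecutive lines interleave across banks, and
 * each physical line lives in exactly one bank (no replicas). */
static unsigned bank_of(uint64_t paddr)
{
    return (unsigned)((paddr >> LINE_BITS) & (NUM_BANKS - 1));
}

int main(void)
{
    uint64_t a = 0x1000;        /* two lines, 64 bytes apart */
    uint64_t b = a + 64;
    printf("bank(%#llx) = %u, bank(%#llx) = %u\n",
           (unsigned long long)a, bank_of(a),
           (unsigned long long)b, bank_of(b));
    return 0;
}
```

Because each physical line maps to exactly one location, no protocol is needed to keep copies coherent; the price is that the distance between a PE and the bank holding its datum, and hence the access latency, varies from access to access.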