In-Network Caching for Chip Multiprocessors

  • Authors:
  • Aditya Yanamandra; Mary Jane Irwin; Vijaykrishnan Narayanan; Mahmut Kandemir; Sri Hari Narayanan

  • Affiliations:
  • Department of Computer Science and Engineering, The Pennsylvania State University (all authors)

  • Venue:
  • HiPEAC '09 Proceedings of the 4th International Conference on High Performance Embedded Architectures and Compilers
  • Year:
  • 2008


Abstract

Effective management of data is critical to the performance of emerging multi-core architectures. Our analysis of applications from SpecOMP reveals that a small fraction of shared addresses accounts for a large portion of accesses. Utilizing this observation, we propose a technique that augments a router in an on-chip network with a small data store to reduce the memory access latency of shared data. In the proposed technique, shared data from read-response packets that pass through the router are cached in its data store, reducing the number of hops required to service future read requests. Our limit study reveals that such caching has the potential to reduce memory access latency by 27% on average. Further, two practical caching strategies are shown to reduce memory access latency by 14% and 17%, respectively, with a data store of just four entries at 2.5% area overhead.
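The mechanism the abstract describes can be sketched in a few lines: each router on the request path holds a tiny LRU data store; read responses populate the stores they traverse, and a later read for the same shared address can be serviced at the first router that hits, saving the remaining hops of the round trip. The following is a minimal illustrative model, not the paper's implementation; all names (`RouterCache`, `read`, the 4-router path) and the hop-counting convention are our own assumptions.

```python
from collections import OrderedDict

class RouterCache:
    """Tiny per-router data store with LRU replacement (illustrative sketch)."""
    def __init__(self, entries=4):  # four entries, as in the abstract's evaluation
        self.entries = entries
        self.store = OrderedDict()  # address -> data, oldest first

    def lookup(self, addr):
        if addr in self.store:
            self.store.move_to_end(addr)  # refresh LRU position on a hit
            return self.store[addr]
        return None

    def insert(self, addr, data):
        if addr in self.store:
            self.store.move_to_end(addr)
        elif len(self.store) >= self.entries:
            self.store.popitem(last=False)  # evict least-recently-used entry
        self.store[addr] = data

def read(path, caches, addr, memory):
    """Walk routers from requester toward the home node; a hit at hop i
    saves the remaining hops of the round trip."""
    for i, router in enumerate(path):
        data = caches[router].lookup(addr)
        if data is not None:
            return data, 2 * (i + 1)  # round trip only to the hit router
    data = memory[addr]
    # The read-response packet passes back through the same routers;
    # cache the shared data in each router's data store en route.
    for router in reversed(path):
        caches[router].insert(addr, data)
    return data, 2 * len(path)

# Demo: 4-hop path to the home node; the second access to a hot
# shared address hits at the first router.
caches = {r: RouterCache(4) for r in range(4)}
memory = {0x10: "shared-data"}
path = [0, 1, 2, 3]
_, hops_miss = read(path, caches, 0x10, memory)
_, hops_hit = read(path, caches, 0x10, memory)
print(hops_miss, hops_hit)  # 8 vs 2
```

In this toy model the repeat access completes in 2 hops instead of 8, which is the latency-reduction effect the abstract quantifies for SpecOMP workloads.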