Low-power inter-core communication through cache partitioning in embedded multiprocessors

  • Authors:
  • Chenjie Yu; Xiangrong Zhou; Peter Petrov

  • Affiliations:
  • University of Maryland, College Park; University of Hawaii, Manoa; University of Maryland, College Park

  • Venue:
  • Proceedings of the 22nd Annual Symposium on Integrated Circuits and System Design: Chip on the Dunes
  • Year:
  • 2009


Abstract

We present an application-driven customization methodology for energy-efficient inter-core communication in embedded multiprocessors. The methodology leverages configurable cache architectures and integrates software and hardware support to achieve energy-efficient data sharing between producer and consumer tasks. The technique is especially beneficial for data-streaming applications that exploit pipeline parallelism, where computational phases are mapped to separate processor cores. The application-driven data cache partitioning achieves low-power and low-latency (no coherence misses) inter-core data sharing. The basic premise of the proposed technique is to use cache partitioning to separate the private data of each producer/consumer task from the several shared data buffers it uses. Such partitioning yields two benefits: 1) data cache accesses issued by the processor and by the coherence mechanism need to look up only a single cache partition instead of the entire cache structure, resulting in significant power reductions; 2) interference (caused by both processor and coherence activity) between private data and the shared data buffers is eliminated, which in turn enables an efficient implementation of application-driven remote cache updates at synchronization boundaries.
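The abstract does not specify a programming interface, so the following is only a minimal sketch of the producer/consumer pipeline pattern the technique targets. It uses POSIX threads for the two pipeline stages and two hypothetical hooks, cache_partition_assign() and cache_remote_update(), as stand-ins for the partition-configuration and remote-update hardware support described above; both hooks are assumptions and are stubbed out so the sketch compiles.

```c
/*
 * Illustrative sketch only (not the paper's API): a two-stage pipeline in
 * which each task keeps its private data and the shared producer/consumer
 * buffer in separate cache partitions, and issues a remote-update hint at
 * the synchronization boundary.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BUF_WORDS 256

/* Shared producer/consumer buffer, intended to live in its own partition. */
static int shared_buf[BUF_WORDS];

/* Hypothetical partition-control hooks (stubs so the sketch builds). */
static void cache_partition_assign(const void *base, size_t len, int way_mask)
{
    /* Real hardware would restrict cache fills for [base, base+len)
     * to the ways selected by way_mask. */
    (void)base; (void)len; (void)way_mask;
}

static void cache_remote_update(const void *base, size_t len, int target_core)
{
    /* Real hardware would push the dirty lines of the shared buffer into
     * the consumer core's partition at the synchronization boundary,
     * so the consumer sees no coherence misses. */
    (void)base; (void)len; (void)target_core;
}

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int buf_full = 0;

static void *producer(void *arg)
{
    (void)arg;
    int private_work[BUF_WORDS];             /* private data, own partition */
    cache_partition_assign(private_work, sizeof private_work, 0x3 /* ways 0-1 */);
    cache_partition_assign(shared_buf, sizeof shared_buf, 0xC /* ways 2-3 */);

    for (int i = 0; i < BUF_WORDS; i++)
        private_work[i] = i * i;              /* stage-1 computation */

    pthread_mutex_lock(&lock);
    memcpy(shared_buf, private_work, sizeof shared_buf);
    /* Synchronization boundary: push the shared partition to core 1. */
    cache_remote_update(shared_buf, sizeof shared_buf, 1);
    buf_full = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    long sum = 0;
    pthread_mutex_lock(&lock);
    while (!buf_full)
        pthread_cond_wait(&ready, &lock);
    for (int i = 0; i < BUF_WORDS; i++)       /* stage-2 computation */
        sum += shared_buf[i];
    pthread_mutex_unlock(&lock);
    printf("consumer sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

In the sketch, the way masks and the target-core argument are made-up parameters; the point is only to show private data and the shared buffer being assigned to disjoint partitions, with the remote-update hint issued exactly once per buffer hand-off at the synchronization point.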