Achieving predictable performance through better memory controller placement in many-core CMPs

  • Authors:
  • Dennis Abts; Natalie D. Enright Jerger; John Kim; Dan Gibson; Mikko H. Lipasti

  • Affiliations:
  • Google Inc., Madison, WI, USA; University of Toronto, Toronto, ON, Canada; KAIST, Daejeon, South Korea; University of Wisconsin–Madison, Madison, WI, USA; University of Wisconsin–Madison, Madison, WI, USA

  • Venue:
  • Proceedings of the 36th Annual International Symposium on Computer Architecture (ISCA '09)
  • Year:
  • 2009

Abstract

In the near term, Moore's law will continue to provide an increasing number of transistors and therefore an increasing number of on-chip cores. Limited pin bandwidth, however, prevents the integration of a large number of memory controllers on-chip. With many cores and few memory controllers, where to place the memory controllers in the on-chip interconnection fabric becomes an important and as yet unexplored question. In this paper, we show how the placement of the memory controllers can reduce contention (hot spots) in the on-chip fabric and lower the variance in reference latency. This in turn provides predictable performance for memory-intensive applications regardless of the processing core on which a thread is scheduled. We explore the design space of on-chip fabrics to find optimal memory controller placement for different topologies (i.e., mesh and torus), routing algorithms, and workloads.
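
The design question the abstract raises can be made concrete with a small back-of-the-envelope model. The sketch below is a hypothetical illustration only, not the paper's evaluation methodology: it assumes an 8x8 mesh, XY dimension-order routing, uniform address interleaving across eight controllers, and two made-up placements (all controllers along one edge versus spread along a diagonal). It uses the hottest link load as a rough stand-in for contention and the variance of per-core hop counts as a rough stand-in for reference-latency variance.

```python
"""Illustrative toy model, NOT the paper's methodology: compare two assumed
memory-controller (MC) placements on a k x k mesh by XY-routing one request
and one reply between every core and every MC (uniform address interleaving),
then reporting the hottest link load (a crude hot-spot proxy) and the spread
of per-core hop counts (a crude proxy for latency variance)."""

from itertools import product
from statistics import mean, pvariance

K = 8  # assumed 8x8 mesh; any size works for this sketch


def xy_route_links(src, dst):
    """Yield the directed links traversed by an XY dimension-order route."""
    (x, y), (tx, ty) = src, dst
    step = 1 if tx > x else -1
    while x != tx:                      # traverse the X dimension first
        yield ((x, y), (x + step, y))
        x += step
    step = 1 if ty > y else -1
    while y != ty:                      # then the Y dimension
        yield ((x, y), (x, y + step))
        y += step


def evaluate(mc_placement):
    """Return (max link load, mean one-way hops, variance of per-core hops),
    assuming every core exchanges one request and one reply with every MC."""
    link_load = {}
    per_core_hops = []
    for core in product(range(K), repeat=2):
        hops_to_mcs = []
        for mc in mc_placement:
            hops = 0
            # Request (core -> MC) and reply (MC -> core), both XY-routed.
            for path in (xy_route_links(core, mc), xy_route_links(mc, core)):
                for link in path:
                    link_load[link] = link_load.get(link, 0) + 1
                    hops += 1
            hops_to_mcs.append(hops / 2)            # one-way hop count
        per_core_hops.append(mean(hops_to_mcs))
    return max(link_load.values()), mean(per_core_hops), pvariance(per_core_hops)


# Two hypothetical placements of 8 MCs, chosen only for contrast:
row_placement = [(x, 0) for x in range(K)]       # all MCs along one edge
diagonal_placement = [(x, x) for x in range(K)]  # MCs spread across the die

for name, placement in (("row", row_placement), ("diagonal", diagonal_placement)):
    hottest, avg_hops, hop_var = evaluate(placement)
    print(f"{name:>8}: hottest link = {hottest:4d} messages, "
          f"mean hops = {avg_hops:5.2f}, hop variance = {hop_var:5.2f}")
```

In this toy model the spread-out diagonal placement cools the hottest link and tightens the hop-count distribution relative to packing all controllers along one edge; this is the intuition the paper quantifies rigorously with detailed simulation of mesh and torus topologies, routing algorithms, and real workloads.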