Rate-based QoS techniques for cache/memory in CMP platforms

  • Authors:
  • Andrew Herdrich;Ramesh Illikkal;Ravi Iyer;Don Newell;Vineet Chadha;Jaideep Moses

  • Affiliations:
  • Intel Corporation, Hillsboro, OR, USA (all authors)

  • Venue:
  • Proceedings of the 23rd international conference on Supercomputing
  • Year:
  • 2009

Abstract

As we embrace the era of chip multi-processors (CMPs), we are faced with two major architectural challenges: (i) QoS or performance management of disparate applications running on CPU cores that contend for shared cache/memory resources, and (ii) global/local power management techniques to stay within overall platform constraints. The problem is exacerbated as the number of cores sharing the resources on a chip increases. In the past, researchers have proposed independent solutions to these two problems. In this paper, we show that rate-based techniques employed for power management can be adapted to address cache/memory QoS issues. The basic approach is to throttle down the processing rate of a core if it is running a low-priority task whose execution interferes with the performance of a high-priority task due to platform resource contention (i.e., cache or memory contention). We evaluate two rate-throttling mechanisms (clock modulation and frequency scaling) for effectively managing the interference between applications running on a CMP platform and delivering QoS/performance management. We show that clock modulation is much more applicable to cache/memory QoS than frequency scaling, and that resource monitoring combined with rate control provides effective power-performance management in CMP platforms.
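
The monitor-and-throttle loop described in the abstract can be illustrated with a minimal user-space sketch. The following C code is not from the paper: it assumes a Linux host with the msr kernel module loaded, uses a placeholder read_llc_miss_rate() stub in place of whatever resource-monitoring facility the platform actually provides, and uses a placeholder value for the IA32_CLOCK_MODULATION duty-cycle field, whose exact encoding is processor-specific (see the Intel SDM).

    /*
     * Illustrative sketch: throttle a low-priority core via on-demand clock
     * modulation when the high-priority core's cache/memory contention
     * exceeds a QoS threshold. The core IDs, threshold, and duty-cycle value
     * are assumptions for illustration only.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_CLOCK_MODULATION 0x19A

    /* Write an MSR on a given core through the Linux /dev/cpu/N/msr interface. */
    static int wrmsr_on_core(int core, uint32_t msr, uint64_t value)
    {
        char path[64];
        snprintf(path, sizeof(path), "/dev/cpu/%d/msr", core);
        int fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        ssize_t n = pwrite(fd, &value, sizeof(value), msr);
        close(fd);
        return n == (ssize_t)sizeof(value) ? 0 : -1;
    }

    /* Placeholder monitor: a real system would sample the high-priority
     * core's LLC miss rate or cache occupancy via performance counters. */
    static double read_llc_miss_rate(int core)
    {
        (void)core;
        return 0.0;  /* stub */
    }

    int main(void)
    {
        const int hi_core = 0, lo_core = 1;      /* assumed core assignment */
        const double miss_threshold = 0.05;      /* assumed QoS target */

        for (;;) {
            double misses = read_llc_miss_rate(hi_core);

            if (misses > miss_threshold) {
                /* Throttle the low-priority core: bit 4 enables on-demand
                 * modulation, bits 3:1 select the duty cycle. 0x14 is a
                 * placeholder encoding; the real encoding is model-specific. */
                wrmsr_on_core(lo_core, IA32_CLOCK_MODULATION, 0x14);
            } else {
                /* Contention acceptable: clear the enable bit to restore
                 * the low-priority core to its full processing rate. */
                wrmsr_on_core(lo_core, IA32_CLOCK_MODULATION, 0x0);
            }
            usleep(10 * 1000);  /* re-evaluate every 10 ms */
        }
        return 0;
    }

The stub only marks where the resource-monitoring measurement would plug in; the actuation path (duty-cycle clock modulation on the interfering core) is the rate-based mechanism the abstract contrasts with frequency scaling.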