Memory management challenges in the power-aware computing era

  • Authors: Avi Mendelson
  • Affiliation: Intel Corporation, Israel Design Center
  • Venue: Proceedings of the 5th International Symposium on Memory Management
  • Year: 2006

Abstract

Process technology has been driving the computer architecture industry for the last two decades. Until recently, most micro-architectures focused on achieving the best performance, usually for a single-threaded application, within a given budget of transistors. Recently, power consumption and power density have become important factors in the design of new processors. This new trend presents new challenges both for the hardware developer community and for the software community.

Power consumption can be divided into two components: static and dynamic. Static power, also known as leakage power, is the power consumed when the logic or memory circuits are not in use, while dynamic power is the power consumed when the logic or memory circuits are active. In the past, static power consumption was negligible and so deserved no special treatment. As transistors shrink, static power becomes more significant, and under some usage models it can even dominate the overall power consumption of the system. In order to control both static and dynamic power consumption, different techniques have been proposed, such as advanced circuits optimized for low power (this is outside the scope of my presentation), advanced power management techniques, new computer architectures, and more. This presentation focuses on power management in general and on power management of the memory subsystem in particular.

Improving the power consumption of the memory subsystem has been a very active research and development area during the last few years. At the micro-architecture level, different methods have been proposed. Drowsy caches [1,2] lower the power consumption of parts of the memory that are not expected to be used in the near future; the power saving comes at the cost of an increased access time to those parts of the memory if the prediction was incorrect. Decay caches [3] go further and save the power of "unused" memory by "cutting the power" to those cells, at the cost of losing their content. Intel recently announced a technique called "smart memory control" that combines the power management mechanism with leakage control of the memory [4]. A few other works, such as [5], suggest combining software techniques that hint to the hardware which memory is needed, and even compressing the data and the instructions in order to reduce the footprint of the program [6].
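
A minimal sketch of the idle-based trade-off just described, under my own assumptions: the CacheLine class, the thresholds, and the latencies below are invented for illustration and are not the mechanisms of [1,2], [3], or [4].

    # Illustrative sketch only: an idle-counter policy in the spirit of drowsy
    # caches [1,2] (data retained at low voltage, slower next access) and decay
    # caches [3] (power cut, data lost). Thresholds and latencies are invented.

    ACTIVE, DROWSY, OFF = "active", "drowsy", "off"

    DROWSY_AFTER = 4     # idle cycles before dropping to drowsy mode (assumed)
    DECAY_AFTER = 16     # idle cycles before cutting power entirely (assumed)
    WAKEUP_PENALTY = 1   # extra cycles to restore a drowsy line (assumed)

    class CacheLine:
        def __init__(self):
            self.state, self.idle = ACTIVE, 0

        def tick(self):
            """Advance one cycle without an access; demote the line if idle."""
            self.idle += 1
            if self.state == ACTIVE and self.idle >= DROWSY_AFTER:
                self.state = DROWSY              # contents kept, access slower
            elif self.state == DROWSY and self.idle >= DECAY_AFTER:
                self.state = OFF                 # leakage saved, contents lost

        def access(self, fetch_latency=20):
            """Return the latency of touching the line in its current state."""
            latency = 1
            if self.state == DROWSY:
                latency += WAKEUP_PENALTY        # mispredicted idleness costs time
            elif self.state == OFF:
                latency += fetch_latency         # decayed line must be refetched
            self.state, self.idle = ACTIVE, 0
            return latency

    if __name__ == "__main__":
        line = CacheLine()
        for gap in (2, 6, 30):                   # increasingly long idle periods
            for _ in range(gap):
                line.tick()
            print(f"idle {gap:2d} cycles -> access latency {line.access()}")

Running the sketch shows the trade-off: a short idle gap costs only the wake-up penalty of the drowsy state, while a gap long enough to trigger decay forces a full refetch of the lost contents.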
Another important aspect of the power crisis for memory management is the intensive use of parallel systems. While in the past fast improvement in performance was achieved by accelerating the speed of the processor, when power consumption and power density limitations are considered the improvement in frequency must be limited, and so the "natural" way to keep improving performance at the same pace is to use parallel execution [7]. Adding more processors to the system increases both the number of levels in the memory hierarchy and the size of each level. Smart management of such a complicated memory subsystem brings up new research opportunities, such as how to balance the usage of shared resources and how to reduce their average power consumption.

My presentation will have three parts: (1) the power crisis, what causes it, and the current trends to handle it; (2) power management mechanisms at the various levels: hardware, compiler, and operating system; and (3) the new multicore architectures and their advanced memory hierarchy. For each of these issues I will discuss the main technology challenges together with current development and research directions.
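
As a purely hypothetical illustration of the shared-resource question above, the sketch below is my own toy model, not a proposal from the talk: the allocation policy, the way counts, and all names are assumptions. It sizes a shared cache to the cores' requested working sets and power-gates the ways nobody needs.

    # Hypothetical toy model: grant each core of a multicore only the ways of a
    # shared last-level cache that it asks for, and power-gate the remainder,
    # trading unused capacity for lower leakage. All numbers are assumed.

    TOTAL_WAYS = 16        # ways in the shared cache (assumed)
    LEAKAGE_PER_WAY = 1.0  # arbitrary leakage-power units per powered way

    def allocate(ways_wanted, total_ways=TOTAL_WAYS):
        """Grant requests smallest-first so a large requester cannot starve
        the others; whatever is left over is a candidate for power-gating."""
        grant, remaining = {}, total_ways
        for core, want in sorted(ways_wanted.items(), key=lambda kv: kv[1]):
            grant[core] = min(want, remaining)
            remaining -= grant[core]
        return grant, remaining   # `remaining` ways can be power-gated

    if __name__ == "__main__":
        demand = {"core0": 6, "core1": 2, "core2": 1, "core3": 0}
        grant, gated = allocate(demand)
        print("way allocation:", grant)
        print(f"gated ways: {gated} (leakage saved ~{gated * LEAKAGE_PER_WAY:.1f} units)")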