Improving power efficiency with compiler-assisted cache replacement

  • Authors:
  • Hongbo Yang;R. Govindarajan;Guang R. Gao;Ziang Hu

  • Affiliations:
  • Sandbridge Technologies Inc., 1 N. Lexington Ave, White Plains, NY 10601, USA (Corresponding author. E-mail: hyang@sandbridgetech.com);Supercomputer Education and Research Centre and the Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India;Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA;Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA

  • Venue:
  • Journal of Embedded Computing - Cache exploitation in embedded systems
  • Year:
  • 2005


Abstract

Data cache in embedded systems plays the dual roles of speeding up program execution and reducing power consumption. However, a hardware-only cache management scheme usually results in unsatisfactory cache utilization. In several new architectures, cache management details are accessible at the instruction level, enabling the compiler's involvement for better cache performance. In particular, the Intel XScale implements a cache-locking mechanism, which enables the compiler to lock certain critical data in the cache with the guarantee that the locked data will not be evicted. In such an architecture, what to lock and when to lock are important issues for achieving good cache performance. To this end, this paper gives a 0/1 knapsack formulation of the problem, which can be solved efficiently with a dynamic programming algorithm. We implemented this formulation in the MIPSpro compiler, and our approach reduces both execution time and power consumption. Power and performance measured on an XScale processor show that our method achieves better execution time than data prefetching at similar or reduced power consumption.
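To illustrate the kind of formulation the abstract describes, the sketch below models what-to-lock selection as a 0/1 knapsack solved by dynamic programming: each candidate data object occupies some number of cache lines (its weight) and is assigned an estimated benefit from being locked (its value), and the capacity is the lockable portion of the cache. The object names, sizes, and benefit values are purely illustrative assumptions, not the paper's actual cost model or experimental data.

```python
# Hedged sketch: choosing which data objects to lock in the cache,
# modeled as a 0/1 knapsack. Sizes (cache lines) and benefits
# (estimated miss reduction) are illustrative assumptions.

def select_locked_data(objects, capacity):
    """objects: list of (name, size_in_lines, benefit) tuples.
    capacity: number of cache lines the compiler may lock.
    Returns (total_benefit, names_of_locked_objects)."""
    n = len(objects)
    # dp[c] = best total benefit achievable with c lockable lines
    dp = [0] * (capacity + 1)
    # choice[i][c] records whether object i is taken at capacity c
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for i, (_, size, benefit) in enumerate(objects):
        # Iterate capacities downward so each object is used at most once
        for c in range(capacity, size - 1, -1):
            if dp[c - size] + benefit > dp[c]:
                dp[c] = dp[c - size] + benefit
                choice[i][c] = True
    # Backtrack to recover the selected set of objects
    locked, c = [], capacity
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            locked.append(objects[i][0])
            c -= objects[i][1]
    return dp[capacity], list(reversed(locked))

# Example: with 5 lockable lines, locking A and B (total size 5,
# benefit 22) beats locking C alone (benefit 15).
best, chosen = select_locked_data(
    [("A", 2, 10), ("B", 3, 12), ("C", 4, 15)], capacity=5)
print(best, chosen)  # → 22 ['A', 'B']
```

The DP runs in O(n x capacity) time, which is practical at compile time since the lockable cache region is small; a real compiler pass would derive the benefit values from profile or reuse analysis rather than fixed constants.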