Reducing energy in instruction caches by using multiple line buffers with prediction

  • Authors:
  • Kashif Ali;Mokhtar Aboelaze;Suprakash Datta

  • Affiliations:
  • School of Computing, Queen's University, Kingston, ON, Canada; Department of Computer Science and Engineering, York University, Toronto, ON, Canada; Department of Computer Science and Engineering, York University, Toronto, ON, Canada

  • Venue:
  • ISHPC'05/ALPS'06: Proceedings of the 6th International Symposium on High-Performance Computing and 1st International Conference on Advanced Low Power Systems
  • Year:
  • 2005

Abstract

Energy consumption plays a crucial role in the design of embedded processors, especially for portable devices. Since memory accesses consume a significant portion of a processor's energy, designing fast, low-energy caches has become an important aspect of modern processor design. In this paper, we present a novel low-energy instruction cache architecture. The proposed architecture consists of the L1 cache, multiple line buffers, and a prediction mechanism that predicts which line buffer, or the L1 cache, to access next. We used simulation to evaluate the proposed architecture and compared it with the HotSpot cache, filter cache, predictive line buffer cache, and way-halting cache. Simulation results show that our approach reduces instruction cache energy consumption by 75% on average without sacrificing performance.
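
To make the fetch path described in the abstract concrete, the sketch below simulates an instruction fetch that first probes a single predicted line buffer, falls back to the remaining buffers, and only accesses the L1 cache on a miss. This is not the authors' simulator or energy model: the buffer count, relative energy costs, replacement policy, and the simple last-used predictor are all illustrative assumptions.

```python
# Minimal sketch of a predictive multiple-line-buffer fetch path.
# All constants and the prediction heuristic are assumptions for illustration.

LINE_SIZE = 32     # bytes per cache line (assumed)
NUM_BUFFERS = 4    # number of line buffers (assumed)
E_BUFFER = 1.0     # relative energy of one line-buffer probe (assumed)
E_L1 = 10.0        # relative energy of one L1 access (assumed)


class PredictiveLineBuffers:
    def __init__(self):
        self.buffers = [None] * NUM_BUFFERS  # tags of lines currently buffered
        self.last_used = 0                   # predictor: reuse the last buffer hit
        self.energy = 0.0

    def fetch(self, addr):
        tag = addr // LINE_SIZE
        predicted = self.last_used
        # Probe only the predicted line buffer first.
        if self.buffers[predicted] == tag:
            self.energy += E_BUFFER
            return "predicted-buffer hit"
        # Misprediction: probe the remaining line buffers.
        for i, t in enumerate(self.buffers):
            if i != predicted and t == tag:
                self.energy += 2 * E_BUFFER  # predicted probe + correct probe
                self.last_used = i
                return "other-buffer hit"
        # Miss in all line buffers: access L1 and fill a buffer.
        self.energy += E_L1
        victim = tag % NUM_BUFFERS           # trivial replacement (assumed)
        self.buffers[victim] = tag
        self.last_used = victim
        return "L1 fill"


if __name__ == "__main__":
    cache = PredictiveLineBuffers()
    # Sequential code followed by a small loop, to show line-buffer reuse.
    trace = list(range(0, 256, 4)) + list(range(0, 64, 4)) * 3
    for pc in trace:
        cache.fetch(pc)
    print(f"relative energy for {len(trace)} fetches: {cache.energy:.1f}")
```

Because sequential and looping code keeps hitting in the small line buffers, most fetches avoid the expensive L1 probe, which is the intuition behind the reported energy savings.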