Decode filter cache for energy efficient instruction cache hierarchy in super scalar architectures

  • Authors: Kugan Vivekanandarajah; Thambipillai Srikanthan; Saurav Bhattacharyya
  • Affiliations: Nanyang Technological University, Singapore; Nanyang Technological University, Singapore; Nanyang Technological University, Singapore
  • Venue: Proceedings of the 2004 Asia and South Pacific Design Automation Conference
  • Year: 2004

Abstract

The power consumption of microprocessors has been increasing in step with the complexity of each successive generation. In general-purpose processors, this is primarily attributed to the high energy consumption of the fetch and decode circuitry, a consequence of the high instruction issue rates demanded of these high-performance processors. The predictive Decode Filter Cache (DFC) has been shown to be effective in reducing the fetch and decode energy consumed by the instruction cache hierarchy of in-order single-issue processors. In this paper we propose architectural enhancements that facilitate the incorporation of the DFC into wide-issue superscalar processors for an energy-efficient memory hierarchy. Extensive simulations on the modified superscalar architecture show that the predictor-based DFC yields average L1 fetch energy reductions of 17.33% and 25.09%, along with 37.2% and 46.6% reductions in the number of decodes, for 64- and 128-instruction DFCs respectively. These fetch and decode energy savings are achieved with a minimal reduction in average Instructions Per Cycle (IPC) of 0.54% and 0.73% for the 64- and 128-instruction DFCs, respectively, over the selected set of SPEC2000 benchmarks.
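
The abstract describes the mechanism only at a high level. The sketch below is a minimal, purely illustrative model of a predictor-guided decode filter cache placed in front of the L1 instruction cache and the decoder: on a predicted DFC hit the already-decoded instruction is supplied directly, skipping both the L1 access and the decode; on a miss the instruction is fetched from L1, decoded, and the decoded form is filled into the DFC. The direct-mapped organisation, the entry counts, the trivially optimistic predictor, and all names (DecodeFilterCache, fetch_stream, etc.) are assumptions made for illustration and are not taken from the paper; the energy benefit follows from the reduced counts of L1 fetches and decodes.

```python
"""Toy model of a predictor-guided Decode Filter Cache (DFC) in the fetch path.
Illustrative sketch only; sizes, organisation, and predictor are assumptions."""

class DecodeFilterCache:
    def __init__(self, num_entries=64):          # e.g. a 64- or 128-instruction DFC
        self.num_entries = num_entries
        self.tags = [None] * num_entries          # direct-mapped: one tag per entry
        self.decoded = [None] * num_entries       # stored pre-decoded instructions

    def lookup(self, pc):
        idx = (pc // 4) % self.num_entries
        tag = pc // (4 * self.num_entries)
        if self.tags[idx] == tag:
            return self.decoded[idx]              # hit: decoded form already available
        return None

    def fill(self, pc, decoded_insn):
        idx = (pc // 4) % self.num_entries
        self.tags[idx] = pc // (4 * self.num_entries)
        self.decoded[idx] = decoded_insn


def fetch_stream(pcs, dfc, predictor, l1_fetch, decode):
    """Count where each instruction is served from and how many decodes occur."""
    stats = {"dfc_hits": 0, "l1_fetches": 0, "decodes": 0}
    for pc in pcs:
        if predictor(pc):                         # predictor steers the fetch to the DFC
            hit = dfc.lookup(pc)
            if hit is not None:
                stats["dfc_hits"] += 1            # L1 access and decode both skipped
                continue
        raw = l1_fetch(pc)                        # fall back to the L1 instruction cache
        stats["l1_fetches"] += 1
        dec = decode(raw)
        stats["decodes"] += 1
        dfc.fill(pc, dec)                         # cache the decoded instruction for reuse
    return stats


if __name__ == "__main__":
    # Tiny loop-dominated trace: a 16-instruction loop executed 100 times.
    trace = [0x1000 + 4 * (i % 16) for i in range(1600)]
    dfc = DecodeFilterCache(num_entries=64)
    stats = fetch_stream(
        trace,
        dfc,
        predictor=lambda pc: True,                # trivially optimistic predictor
        l1_fetch=lambda pc: ("raw", pc),
        decode=lambda raw: ("decoded", raw[1]),
    )
    print(stats)   # most fetches hit the DFC after the first loop iteration
```

On this kind of loop-heavy trace, only the first iteration pays for L1 fetches and decodes; every later iteration is served from the DFC, which is the intuition behind the fetch and decode energy reductions reported in the abstract.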