Making a case for split data caches for embedded applications

  • Authors:
  • Afrin Naz, Krishna Kavi, Mehran Rezaei, Wentong Li

  • Affiliations:
  • The University of North Texas, Denton, Texas (all authors)

  • Venue:
  • MEDEA '05 Proceedings of the 2005 workshop on MEmory performance: DEaling with Applications , systems and architecture
  • Year:
  • 2005

Abstract

In this paper we show that cache memories for embedded applications can be designed to increase performance while reducing area and energy consumption. We have previously shown that separating the data cache into an array cache and a scalar cache can lead to significant performance improvements for scientific benchmarks. In this paper we show that such a split data cache can also benefit embedded applications. To further improve the split cache organization, we augment the scalar cache with a small victim cache and the array cache with a small stream buffer. This "integrated" cache organization can lead to a 43% reduction in overall cache size, a 37% reduction in access time, and a 63% reduction in power consumption when compared to a unified 2-way set-associative data cache on media benchmarks from the MiBench suite.
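The organization described in the abstract can be illustrated with a minimal simulation sketch: scalar accesses go to a small cache backed by a victim buffer that catches conflict evictions, while array accesses go to a separate cache backed by a stream buffer that prefetches sequential blocks. All sizes, the block size, and the array/scalar classification below are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of a split data cache: scalar cache + victim cache,
# array cache + stream buffer. Sizes are assumptions for illustration.
from collections import OrderedDict

BLOCK = 16  # bytes per cache block (assumed)

class DirectMappedCache:
    def __init__(self, n_blocks):
        self.n = n_blocks
        self.lines = {}  # index -> tag

    def access(self, addr):
        """Return (hit, evicted_block_address_or_None); fills on miss."""
        blk = addr // BLOCK
        idx, tag = blk % self.n, blk // self.n
        hit = self.lines.get(idx) == tag
        evicted = None
        if not hit:
            if idx in self.lines:
                evicted = (self.lines[idx] * self.n + idx) * BLOCK
            self.lines[idx] = tag
        return hit, evicted

class VictimCache:
    """Small fully associative LRU buffer holding recently evicted blocks."""
    def __init__(self, n_blocks):
        self.n = n_blocks
        self.blocks = OrderedDict()  # block number -> None, in LRU order

    def probe(self, addr):
        return self.blocks.pop(addr // BLOCK, True) is None

    def insert(self, addr):
        self.blocks[addr // BLOCK] = None
        if len(self.blocks) > self.n:
            self.blocks.popitem(last=False)  # drop least recently inserted

class StreamBuffer:
    """Holds the next few sequential blocks after an array-cache miss."""
    def __init__(self, depth):
        self.depth = depth
        self.blocks = set()

    def probe(self, addr):
        return addr // BLOCK in self.blocks

    def refill(self, addr):
        blk = addr // BLOCK
        self.blocks = {blk + i for i in range(1, self.depth + 1)}

class SplitCache:
    def __init__(self):
        self.scalar = DirectMappedCache(8)   # assumed sizes
        self.array = DirectMappedCache(8)
        self.victim = VictimCache(4)
        self.stream = StreamBuffer(4)
        self.hits = self.misses = 0

    def access(self, addr, is_array):
        if is_array:
            hit, _ = self.array.access(addr)
            if not hit and self.stream.probe(addr):
                hit = True               # sequential access caught by stream buffer
            if not hit:
                self.stream.refill(addr)  # prefetch the following blocks
        else:
            hit, evicted = self.scalar.access(addr)
            if evicted is not None:
                self.victim.insert(evicted)  # keep conflict victims close by
            if not hit and self.victim.probe(addr):
                hit = True               # conflict miss saved by victim cache
        self.hits += hit
        self.misses += not hit
```

As a usage example, a sequential array walk misses only once per stream-buffer refill, and a scalar block evicted by a conflicting address is recovered from the victim cache on its next access:

```python
c = SplitCache()
for a in range(0, 256, 16):        # 16 sequential array blocks
    c.access(a, is_array=True)
# 4 compulsory misses (one per refill), 12 stream-buffer hits

c2 = SplitCache()
c2.access(0, is_array=False)       # miss, fills scalar cache
c2.access(128, is_array=False)     # conflict miss, evicts block 0 to victim
c2.access(0, is_array=False)       # recovered from the victim cache: a hit
```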