Improving TriMedia cache performance by profile guided code reordering

  • Authors:
  • Norbert Esser; Renga Sundararajan; Joachim Trescher

  • Affiliations:
  • NXP Semiconductors, San Jose, CA; NXP Semiconductors, San Jose, CA; NXP Research, Eindhoven, The Netherlands

  • Venue:
  • SAMOS'07 Proceedings of the 7th international conference on Embedded computer systems: architectures, modeling, and simulation
  • Year:
  • 2007

Abstract

There is an ever-increasing gap between memory and processor performance. As a result, exploiting the cache becomes increasingly important, especially for embedded systems, where cache sizes are much smaller than those of general-purpose processors. The fine-tuning of an application with respect to cache behavior currently depends largely on the skill of the application programmer. Given the difficulty of predicting cache behavior, this remains a cumbersome task even when great skill is applied. A wide range of approaches, in hardware as well as in software, can be used to relieve the programmer's burden. On the hardware side, we can experiment, for example, with cache sizes, line sizes, replacement policies, and cache organization. On the software side, we can use various optimization techniques such as software pipelining, branch prediction, and code reordering. The research described in this paper focuses on improving performance by using code reordering techniques. This paper reports on the work we have done to reduce the number of line fetches in the instruction cache. We have extended the functionality of the linker in the TriMedia compiler chain so that the number of fetches during program execution is reduced. By reordering the code, we ensure that hot code stays in the cache and the cache is not polluted with cold code. Because fewer fetches are needed, we expect a performance increase. By analyzing and profiling code, we obtain execution statistics that can help us find better code allocations.
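The abstract does not give implementation details, but the underlying idea (placing frequently executed code contiguously so it occupies few instruction-cache lines, while segregating rarely executed code) can be illustrated with a small sketch. The profile format, the `hot_threshold` parameter, and the function names below are illustrative assumptions, not part of the TriMedia toolchain; the actual work operates inside the linker on profile data gathered for the program's code sections.

```python
# Minimal sketch (hypothetical, not the TriMedia linker itself): derive a
# link order from per-function dynamic execution counts so that hot code is
# packed contiguously and cold code is pushed to the end of the text segment.

def reorder_for_icache(profile, hot_threshold=0.9):
    """profile: dict mapping function name -> dynamic execution count."""
    total = sum(profile.values())
    # Rank functions from most to least frequently executed.
    ranked = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)

    hot, cold, covered = [], [], 0
    for name, count in ranked:
        # Functions covering roughly the first 90% of all executions
        # go into the hot region; the rest are treated as cold code.
        if covered < hot_threshold * total:
            hot.append(name)
        else:
            cold.append(name)
        covered += count

    # Hot functions are laid out back to back, so they map onto few cache
    # lines and stay resident; cold functions no longer evict them when
    # they are occasionally fetched.
    return hot + cold

# Example with a toy profile; the resulting order could be fed to a linker
# as an explicit placement of code sections.
order = reorder_for_icache({"decode": 120000, "filter": 90000,
                            "init": 3, "error_handler": 1})
print(order)
```

This greedy hot/cold split is only one of several possible layout heuristics; the paper's contribution is integrating profile-driven reordering into the TriMedia compiler chain's linker rather than this particular ordering policy.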