Address Code and Arithmetic Optimizations for Embedded Systems

  • Authors:
  • J. Ramanujam; Satish Krishnamurthy; Jinpyo Hong; Mahmut Kandemir

  • Affiliations:
  • Department of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA (Ramanujam, Krishnamurthy, Hong); Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA (Kandemir)

  • Venue:
  • ASP-DAC '02 Proceedings of the 2002 Asia and South Pacific Design Automation Conference
  • Year:
  • 2002

Abstract

An important class of problems, used widely in both the embedded systems and scientific domains, performs memory-intensive computations on large data sets. These data sets are typically stored in main memory, which means that the compiler must generate the address of a memory location in order to store each data element and generate the same address again when the element is subsequently retrieved. This memory address computation is quite expensive, and if it is not performed efficiently, performance degrades significantly. In this paper, we develop a new compiler approach for optimizing the memory performance of subscripted or array variables and their address generation in stencil problems, which are common in embedded image processing and other applications. Our approach exploits the observation that in these stencils, most of the elements accessed are stored close to one another in memory. We optimize stencil codes with a view to reducing both the arithmetic and the address computation overhead. The regularity of the access pattern and the reuse of data elements between successive iterations of the loop body mean that there is a common sub-expression between any two successive iterations; such common sub-expressions are difficult to detect using state-of-the-art compiler technology. If the value of the common sub-expression is stored in a scalar, the next iteration can use the value in this scalar instead of performing the computation all over again, which greatly reduces the arithmetic overhead. Since only one scalar is kept in a register, there is almost no register pressure. In addition, all array accesses are replaced by pointer dereferences, where the pointers are incremented after each iteration; this reduces the address computation overhead. Our solution is the only one so far to exploit both scalar conversion and common sub-expressions. Extensive experimental results on several codes show that our approach performs better than existing approaches.
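
To make the two transformations concrete, the sketch below illustrates the general idea on a simple one-dimensional kernel. It is only an illustration under assumed conditions, not the authors' actual compiler output: the kernel (a sum of squares of adjacent elements) and all function and variable names are hypothetical. The optimized version carries the value shared by consecutive iterations in a single scalar and replaces subscripted accesses with pointer dereferences that are incremented each iteration.

```c
#include <stddef.h>

/* Naive form: each iteration recomputes a[i+1]*a[i+1], which the next
   iteration needs again as a[i]*a[i], and every access pays the
   base-plus-offset address arithmetic for the subscript. */
void stencil_naive(const int *a, int *b, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        b[i] = a[i] * a[i] + a[i + 1] * a[i + 1];
}

/* Sketch of the combined transformation described in the abstract
   (illustrative only):
   - the value shared by consecutive iterations (here a[i+1]*a[i+1])
     is carried in one scalar, so each iteration performs one multiply
     instead of two;
   - array subscripts become pointer dereferences, and the pointers are
     simply incremented each iteration. */
void stencil_opt(const int *a, int *b, size_t n)
{
    if (n < 2)
        return;
    const int *p = a;           /* walks over the input  */
    int *q = b;                 /* walks over the output */
    int carried = (*p) * (*p);  /* square of the current element */
    for (size_t i = 0; i + 1 < n; i++) {
        p++;                        /* advance to a[i+1] */
        int next = (*p) * (*p);     /* the only new work this iteration */
        *q++ = carried + next;      /* b[i] = a[i]^2 + a[i+1]^2 */
        carried = next;             /* reused by the next iteration */
    }
}
```

Only one scalar (`carried`) is live across iterations, so register pressure stays negligible, and the per-iteration address computation reduces to two pointer increments; for the wider, multi-dimensional stencils targeted in the paper, the reused partial results and saved address arithmetic are correspondingly larger.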