Exploiting parallelism in memory operations for code optimization

  • Authors:
  • Yunheung Paek; Junsik Choi; Jinoo Joung; Junseo Lee; Seonwook Kim

  • Affiliations:
  • School of Electrical Engineering, Seoul National University, Seoul, Korea; School of Electrical Engineering, Seoul National University, Seoul, Korea; Samsung Advanced Institute of Technology, Gyeonggi-Do, Korea; Samsung Advanced Institute of Technology, Gyeonggi-Do, Korea; Department of Electronics and Computer Engineering, Korea University, Seoul, Korea

  • Venue:
  • LCPC'04: Proceedings of the 17th International Conference on Languages and Compilers for High Performance Computing
  • Year:
  • 2004

Abstract

Code size reduction is becoming ever more important for compilers targeting embedded processors, because these processors are often severely limited by storage constraints, so a smaller code size can have a significant positive impact on their performance. Existing code size reduction techniques vary in their motivations and application contexts, and many exploit special hardware features of their target processors. In this work, we propose a novel technique that fully utilizes a set of hardware instructions, called multiple load/store (MLS) or parallel load/store (PLS), which are specifically designed to reduce code size by minimizing the number of memory operations in the code. Many microprocessors provide MLS instructions, yet existing compilers do not fully exploit their potential benefit and use them only in limited cases. This is mainly because optimizing memory accesses with MLS instructions in the general case is an NP-hard problem that requires jointly assigning registers and memory offsets to the variables in a stack frame. Our technique uses a pair of heuristics to handle this problem efficiently within a polynomial time bound.
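
To make the problem concrete, the sketch below is a toy Python illustration, not the paper's actual algorithm: a greedy heuristic assigns stack offsets and registers to variables in first-use order, then merges adjacent loads into a single MLS-style instruction whenever the offsets are consecutive words and the destination registers are ascending (an ARM LDM-like constraint assumed here for illustration). All names (`assign_layout`, `merge_loads`, `WORD`) and the instruction syntax are hypothetical.

```python
# Illustrative sketch only (assumptions noted above); not the authors' method.

WORD = 4  # bytes per stack slot (assumption)

def assign_layout(variables):
    """Assign each variable a stack offset and a register in first-use order."""
    return {v: {"offset": i * WORD, "reg": i} for i, v in enumerate(variables)}

def merge_loads(load_sequence, layout):
    """Greedily group adjacent loads that satisfy the MLS constraints:
    consecutive word offsets and ascending destination registers."""
    merged, group = [], []
    for v in load_sequence:
        slot = layout[v]
        if group:
            prev = layout[group[-1]]
            contiguous = slot["offset"] == prev["offset"] + WORD
            ascending = slot["reg"] > prev["reg"]
            if contiguous and ascending:
                group.append(v)
                continue
            merged.append(group)
            group = []
        group.append(v)
    if group:
        merged.append(group)
    return merged

def emit(groups, layout):
    """Print one MLS per mergeable group, or a plain load otherwise."""
    for g in groups:
        base = layout[g[0]]["offset"]
        regs = ", ".join(f"r{layout[v]['reg']}" for v in g)
        if len(g) > 1:
            print(f"MLS  [sp, #{base}], {{{regs}}}   ; one instruction for {g}")
        else:
            print(f"LOAD r{layout[g[0]]['reg']}, [sp, #{base}]")

if __name__ == "__main__":
    layout = assign_layout(["a", "b", "c", "d"])
    # Four separate loads collapse into a single MLS under this layout.
    emit(merge_loads(["a", "b", "c", "d"], layout), layout)
```

In this toy setting the layout is fixed before merging; the difficulty the paper addresses is that offsets and registers must be chosen jointly across all loads and stores in a function so as to maximize such merging opportunities, which is what makes the general problem NP-hard.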