Automatic memory partitioning and scheduling for throughput and power optimization

  • Authors:
  • Jason Cong;Wei Jiang;Bin Liu;Yi Zou

  • Affiliations:
  • University of California, Los Angeles, CA (all authors)

  • Venue:
  • Proceedings of the 2009 International Conference on Computer-Aided Design
  • Year:
  • 2009


Abstract

Hardware acceleration is crucial in modern embedded system design to meet the explosive demands on performance and cost. Computation kernels selected for acceleration are usually captured by nested loops, which are optimized by state-of-the-art techniques such as loop tiling and loop pipelining. However, memory bandwidth bottlenecks prevent designs from reaching the optimal throughput permitted by the available parallelism. In this paper we present an automatic memory partitioning technique which can efficiently improve throughput and reduce the energy consumption of pipelined loop kernels under given throughput constraints and platform requirements. Our partitioning scheme consists of two steps. The first step uses cycle-accurate scheduling information to meet hard constraints on memory bandwidth, targeting synchronized hardware designs. Experimental results show an average 6X throughput improvement on a set of real-world designs with a moderate area increase (about 45% on average), since fewer resource-sharing opportunities exist at higher throughput in optimized designs. The second step further partitions the memory banks to reduce the dynamic power consumption of the final design. In contrast with previous approaches, our technique can statically compute memory access frequencies in polynomial time with little to no profiling. Experimental results show about 30% power reduction on the same set of benchmarks.
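
To illustrate the general idea behind the first step (not the authors' actual algorithm, which derives partitions from cycle-accurate scheduling information), the C sketch below assumes a simple cyclic partitioning of one array into two banks, so that consecutive loop iterations touch different banks and a synthesis tool could serve both accesses in the same cycle. The partition factor of two and the helper names are illustrative assumptions only.

```c
/* Hypothetical sketch of cyclic memory partitioning, not the paper's method.
 * A monolithic array A[N] is split into NUM_BANKS banks: element i is stored
 * in bank (i % NUM_BANKS) at offset (i / NUM_BANKS). */
#include <stdio.h>

#define N         64
#define NUM_BANKS 2   /* assumed partition factor chosen to meet bandwidth */

static int bank[NUM_BANKS][N / NUM_BANKS];

static void write_elem(int i, int value) {
    bank[i % NUM_BANKS][i / NUM_BANKS] = value;
}

static int read_elem(int i) {
    return bank[i % NUM_BANKS][i / NUM_BANKS];
}

int main(void) {
    for (int i = 0; i < N; i++)
        write_elem(i, i * i);

    /* In a pipelined kernel, iteration i hits bank 0 and iteration i+1 hits
     * bank 1, so the two reads below could be scheduled in one cycle instead
     * of being serialized on a single-port memory. */
    long sum = 0;
    for (int i = 0; i < N; i += NUM_BANKS) {
        sum += read_elem(i);       /* bank 0 */
        sum += read_elem(i + 1);   /* bank 1 */
    }
    printf("sum = %ld\n", sum);
    return 0;
}
```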