An introduction to processor-time-optimal systolic arrays

  • Authors:
  • P. Cappello; Ö. Eğecioğlu; C. Scheiman

  • Affiliations:
  • Department of Computer Science, University of California, Santa Barbara, CA; Department of Computer Science, University of California, Santa Barbara, CA; Department of Computer Science, California Polytechnic State University, San Luis Obispo, CA

  • Venue:
  • Highly parallel computations
  • Year:
  • 2001

Abstract

We consider computations suitable for systolic arrays, often called regular array computations or systems of uniform recurrence relations. In such computations, the tasks to be computed are viewed as the nodes of a directed acyclic graph (dag), whose arcs represent the data dependencies. A processor-time-minimal schedule uses the minimum number of processors needed to extract the maximum parallelism from the dag. We present a technique for finding a lower bound on the number of processors needed to achieve a given schedule of an algorithm represented as a dag. The application of this technique is illustrated with a tensor product computation. We then consider the free schedules of algorithms for matrix product, Gaussian elimination, and transitive closure. For each problem, we provide a time-minimal processor schedule that meets the computed processor lower bound, including the bound for the tensor product.
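
The dag model and the notion of a processor lower bound can be made concrete with a small sketch. The following Python snippet is an illustration only, not the construction used in the chapter: it builds the free (as-soon-as-possible) schedule of the standard n x n x n matrix-product dag, in which node (i, j, k) depends on its predecessors (i-1, j, k), (i, j-1, k), and (i, j, k-1). In this particular dag every node lies on a longest path, so the free schedule is the unique time-minimal schedule, and its peak concurrency is a lower bound on the number of processors any time-minimal schedule requires.

```python
# Minimal sketch (not the authors' technique): free schedule and peak
# concurrency of the canonical n x n x n matrix-product dag.
from collections import Counter

def free_schedule_peak(n):
    """Return (makespan, peak concurrency) for the n x n x n matrix-product dag.

    Node (i, j, k) depends on (i-1, j, k), (i, j-1, k), and (i, j, k-1),
    so its earliest start time in the free schedule is simply i + j + k.
    Because every node here is on a longest path, earliest and latest start
    times coincide: the free schedule is the only time-minimal schedule, and
    the widest time step is a processor lower bound for it.
    """
    level_width = Counter()
    for i in range(n):
        for j in range(n):
            for k in range(n):
                level_width[i + j + k] += 1
    makespan = 3 * (n - 1) + 1          # 3n - 2 time steps
    peak = max(level_width.values())    # widest time step
    return makespan, peak

if __name__ == "__main__":
    for n in (2, 3, 4, 8):
        makespan, peak = free_schedule_peak(n)
        print(f"n={n}: time-minimal makespan = {makespan} steps, "
              f"processor lower bound = {peak}")
```

For n = 2 and n = 3 the sketch reports peaks of 3 and 7 processors, which agrees with the ceiling of 3n^2/4 known for time-minimal matrix product; the chapter's contribution is a general lower-bound technique and matching schedules rather than this per-dag enumeration.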