Scheduling time-constrained instructions on pipelined processors

  • Authors:
  • Allen Leung; Krishna V. Palem; Amir Pnueli

  • Affiliations:
  • New York Univ., New York, NY; New York Univ., New York, NY, and Georgia Inst. of Technology, Atlanta; Weizmann Inst. of Science, Rehovot, Israel, and New York Univ., New York, NY

  • Venue:
  • ACM Transactions on Programming Languages and Systems (TOPLAS)
  • Year:
  • 2001

Abstract

In this work we investigate the problem of scheduling instructions on idealized microprocessors with multiple pipelines, in the presence of precedence constraints, release-times, deadlines, and latency constraints. A latency of l_ij specifies that there must be at least l_ij time-steps between the completion time of instruction i and the start time of instruction j. A latency of l_ij = −1 can be used to specify that j may be scheduled concurrently with i, but not earlier. We present a generic algorithm that runs in O(n² log n α(n) + ne) time, given n instructions and e edges in the precedence DAG, where α(n) is the functional inverse of the Ackermann function. Our algorithm can be used to construct feasible schedules for various classes of instances, including instances with the following configurations: (1) one pipeline, with individual release-times and deadlines, and where the latencies between instructions are restricted to 0 and 1; (2) m pipelines, with individual release-times and deadlines, and monotone-interval-order precedences; (3) two pipelines with latencies of −1 or 0, and release-times and deadlines; (4) one pipeline, latencies of 0 or 1, and individual processing times that are at least one; (5) m pipelines, intree precedences, constant latencies, and deadlines; (6) m pipelines, outtree precedences, constant latencies, and release-times. For instances with deadlines, optimal schedules that minimize the maximal tardiness can be constructed using binary search, in O(log n) iterations of our algorithm. We obtain our results using backward scheduling, a very general relaxation method that extends, unifies, and clarifies many previous results on instruction scheduling for pipelined and parallel machines.
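
As a rough illustration of two points in the abstract, the sketch below (in Python, with all names hypothetical) shows how a latency constraint l_ij is read against a candidate schedule, and how a binary search over the tardiness bound would drive a feasibility oracle standing in for the authors' backward-scheduling algorithm. This is a minimal sketch assuming unit processing times, not the paper's implementation.

```python
def respects_latencies(start, latencies, proc_time=1):
    """Check that a candidate schedule obeys every latency constraint.

    start[i]   -- scheduled start time of instruction i
    latencies  -- dict mapping (i, j) to l_ij; j must start at least
                  l_ij time-steps after i completes (l_ij = -1 allows
                  j to run concurrently with i, but not earlier)
    proc_time  -- unit processing time assumed for every instruction
    """
    return all(start[j] >= start[i] + proc_time + lij
               for (i, j), lij in latencies.items())


def min_max_tardiness(deadlines, is_feasible, upper_bound):
    """Binary search for the smallest maximal tardiness T such that the
    instance becomes feasible when every deadline is relaxed by T.

    is_feasible  -- hypothetical oracle (standing in for the paper's
                    feasibility algorithm) that reports whether a
                    feasible schedule exists for the relaxed deadlines
    upper_bound  -- any known upper bound on the optimal tardiness
    """
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible({i: d + mid for i, d in deadlines.items()}):
            hi = mid          # feasible: try a smaller tardiness bound
        else:
            lo = mid + 1      # infeasible: tardiness must be larger
    return lo
```

In the paper the search is organized so that O(log n) calls to the scheduling algorithm suffice; here `upper_bound` is simply any safe bound supplied by the caller.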