A scheduling algorithm for optimization and early planning in high-level synthesis

  • Authors:
  • Seda Ogrenci Memik; Ryan Kastner; Elaheh Bozorgzadeh; Majid Sarrafzadeh

  • Affiliations:
  • Northwestern University, Evanston, IL; University of California, Santa Barbara, Santa Barbara, CA; University of California, Irvine, Irvine, CA; University of California, Los Angeles, Los Angeles, CA

  • Venue:
  • ACM Transactions on Design Automation of Electronic Systems (TODAES)
  • Year:
  • 2005

Abstract

The complexity of applications implemented on embedded and programmable systems grows with the capacity and capability of these systems. Manually mapping applications onto such systems is becoming a very tedious task, which draws attention to the use of high-level synthesis within design flows. At the same time, it is essential to formulate optimization objectives flexibly and to plan for various design objectives early in the design flow. In this work, we address these issues in the context of data flow graph (DFG) scheduling, an essential element of the high-level synthesis flow. We present an algorithm that schedules a chain of operations with data dependencies among consecutive operations in a single step; this local problem is solved repeatedly to generate the schedule for the whole DFG. The local problem is formulated as a maximum-weight noncrossing bipartite matching, which we solve with a technique from computational geometry. This technique provides a theoretical guarantee on solution quality for scheduling a single chain of operations. Although still local, this provides a wider perspective on the global scheduling objectives. In our experiments, we compared the latencies obtained by our algorithm with the optimal latencies given by an exact solution to the integer linear programming (ILP) formulation of the problem. In 9 of the 14 DFGs tested, our algorithm found the optimal solution; in the remaining five benchmarks, it produced latencies comparable to the optimum. The formulation of the objective function in our algorithm provides the flexibility to incorporate different optimization goals. We illustrate this versatility with specific objective functions and with experimental results on how well the final schedules capture these objectives.
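
To make the core local subproblem concrete, the sketch below (not the authors' implementation) assigns one chain of data-dependent operations to ordered control steps as a maximum-weight noncrossing matching. It uses a straightforward dynamic program rather than the computational-geometry technique the paper describes, and the weight matrix w is a hypothetical per-(operation, step) score standing in for whatever optimization objective is being planned for.

    # Minimal sketch: schedule one chain of n data-dependent operations onto m
    # ordered control steps.  Operation i must be placed at a strictly later
    # step than operation i-1 (the noncrossing constraint); w[i][s] is an
    # assumed desirability weight for putting operation i at step s.

    def schedule_chain(w):
        """w: n x m weight matrix; returns (best total weight, step per operation)."""
        n, m = len(w), len(w[0])
        NEG = float("-inf")
        # dp[i][s]: best total weight with operations 0..i placed and operation i at step s
        dp = [[NEG] * m for _ in range(n)]
        choice = [[-1] * m for _ in range(n)]
        for s in range(m):
            dp[0][s] = w[0][s]
        for i in range(1, n):
            best, best_s = NEG, -1
            for s in range(1, m):
                # best placement of operation i-1 at any step strictly before s
                if dp[i - 1][s - 1] > best:
                    best, best_s = dp[i - 1][s - 1], s - 1
                if best > NEG:
                    dp[i][s] = best + w[i][s]
                    choice[i][s] = best_s
        # recover the noncrossing assignment by backtracking
        end = max(range(m), key=lambda s: dp[n - 1][s])
        if dp[n - 1][end] == NEG:
            return None  # chain is longer than the number of available steps
        steps = [0] * n
        s = end
        for i in range(n - 1, -1, -1):
            steps[i] = s
            s = choice[i][s]
        return dp[n - 1][end], steps

    if __name__ == "__main__":
        # 3 operations, 5 candidate steps; weights favour early, cheap slots
        w = [[5, 4, 3, 2, 1],
             [0, 6, 5, 1, 1],
             [0, 0, 4, 7, 2]]
        print(schedule_chain(w))  # (18, [0, 1, 3])

Because each operation must land strictly later than its predecessor in the chain, the noncrossing constraint reduces to a running prefix maximum over earlier steps, giving O(n·m) time for n operations and m candidate steps.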