A Fully-Pipelined Solutions Constructor for Dynamic Programming Problems
ICCI '91 Proceedings of the International Conference on Computing and Information: Advances in Computing and Information
In this paper we propose a novel way of deriving a family of fully-pipelined linear systolic algorithms for computing the solutions of a dynamic programming problem. An important feature of these algorithms is their modularity: one may simply add more processors to the array as the size of the problem increases. Each cell has a fixed amount of local storage α, and the time delay between two consecutive cells of the array is constant. The time complexity and the number of cells in our array tend to n² + O(n) and n²/α + O(n), respectively, as α increases. This represents the best known performance for such an algorithm.
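The idea of a linear array in which each cell owns a fixed amount α of local storage and forwards boundary values to its neighbour can be sketched in software. The following is an illustrative simulation only, not the paper's construction: it uses the longest common subsequence (LCS) recurrence as a stand-in dynamic programming problem, and the function name, cell layout, and `alpha` parameter are assumptions chosen for the sketch. Each cell holds a band of α columns of the DP table, and each input symbol drives one DP row through the pipeline, with a single boundary value passed between consecutive cells per row (the constant inter-cell delay).

```python
# Simulation of a fixed-size linear systolic array solving a sample DP
# problem (LCS). Hypothetical sketch: each cell owns `alpha` columns of
# the DP table (its fixed local storage) and passes one boundary value
# per row to its right neighbour.
def lcs_systolic(a, b, alpha):
    m = len(b)
    num_cells = (m + alpha - 1) // alpha  # ceil(m / alpha) cells
    if num_cells == 0:
        return 0
    cells = [{
        "cols": b[k * alpha:(k + 1) * alpha],              # columns owned by cell k
        "prev": [0] * len(b[k * alpha:(k + 1) * alpha]),   # row i-1 values (local storage)
        "prev_in": 0,  # boundary value received during the previous row
    } for k in range(num_cells)]
    for ch in a:      # one DP row streams through the pipeline per symbol
        left = 0      # dp[i][0] = 0 enters the leftmost cell
        for cell in cells:
            in_val = left
            diag = cell["prev_in"]  # dp[i-1][j-1] for the first local column
            cur = []
            for up, cb in zip(cell["prev"], cell["cols"]):
                # LCS recurrence: match extends the diagonal, else take the max
                val = diag + 1 if ch == cb else max(up, left)
                cur.append(val)
                diag, left = up, val
            cell["prev"] = cur
            cell["prev_in"] = in_val
            # `left` now carries this cell's rightmost value to the next cell
    return cells[-1]["prev"][-1]
```

Note how modularity shows up in the sketch: the answer is independent of `alpha`, so a larger problem can be handled either by enlarging each cell's storage or by appending more cells to the array.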