The data flow concept of computation seeks to achieve high performance by allowing concurrent execution of instructions based on the availability of data. This thesis explores the translation of a subset of the high-level language VAL to data flow graphs. The major problem in performing this translation for the target machine, the Dennis-Misunas data flow computer, stems from the restriction that, at any time during graph execution, at most one value may occupy any given arc. The data/acknowledge arc pair transformation is introduced as a means of implementing this required operational behavior. Its effect on data flow graph operation is then explored as it relates to correctness and performance. Though the arc transformation enables graphs to execute without the possibility of deadlock, the resulting overhead and the potential loss of some concurrency represent significant costs. Two optimization techniques aimed at minimizing these problems are developed for transformed graphs. The first, eliminating unneeded acknowledge arcs, analyzes VAL constructs to identify arc pairs whose acknowledge arc can safely be removed. The second, balancing token flow, specifies a method of inserting identity operators into a graph to pipeline input sets and thereby increase graph throughput. Though developed within the context noted, the translation and optimization issues described should prove applicable to other data flow architectures.
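The one-token-per-arc rule and the acknowledge discipline described above can be sketched as a small simulation. This is a minimal illustrative model, not the thesis's actual graph representation: arcs hold at most one token, and an operator may fire only when every input arc carries a token and every output arc is empty (the condition the acknowledge arcs enforce).

```python
# Illustrative sketch of dataflow firing under the one-token-per-arc
# restriction. All names (Arc, Operator, etc.) are hypothetical.

class Arc:
    """An arc may carry at most one value at any time."""
    def __init__(self):
        self.token = None

    def empty(self):
        return self.token is None


class Operator:
    def __init__(self, fn, inputs, outputs):
        self.fn, self.inputs, self.outputs = fn, inputs, outputs

    def enabled(self):
        # Fire only when every input arc holds a token and every
        # output arc is empty -- the behavior the data/acknowledge
        # arc pairs are introduced to guarantee.
        return (all(not a.empty() for a in self.inputs)
                and all(a.empty() for a in self.outputs))

    def fire(self):
        args = [a.token for a in self.inputs]
        for a in self.inputs:
            a.token = None          # consuming a token frees the arc
        result = self.fn(*args)
        for a in self.outputs:
            a.token = result


# A two-stage pipeline: increment, then double.
a, b, c = Arc(), Arc(), Arc()
inc = Operator(lambda x: x + 1, [a], [b])
dbl = Operator(lambda x: 2 * x, [b], [c])

a.token = 3                         # inject one input value
results = []
while any(op.enabled() for op in (inc, dbl)):
    for op in (inc, dbl):
        if op.enabled():
            op.fire()
    if not c.empty():
        results.append(c.token)
        c.token = None              # sink consumes the output

print(results)                      # the value 3 emerges as (3+1)*2 = 8
```

Because each stage blocks until its output arc is free, a second input injected on `a` would pipeline safely behind the first rather than overwrite a token in flight; balancing token flow with identity operators, as the abstract notes, is what keeps such a pipeline full.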