Processor Utilization in a Linearly Connected Parallel Processing System
IEEE Transactions on Computers
A low-level parallel processor (LLPP) is one in which two or more machine-level operations execute in parallel. This paper analyzes the use of linearly connected LLPPs for the parallel evaluation of program fragments. A graph-theoretic model is presented that captures the communication constraints of linearly connected parallel processors. A tight necessary condition is given for assignments of program fragments to linearly connected LLPPs that incur no communication delays; several weaker sufficient conditions are also derived, and efficient heuristics for finding optimal assignments are developed.
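The communication constraint described above can be illustrated with a minimal sketch (not the paper's model or algorithm). The assumption here is that on a linear interconnect a value moves at most one processor per time step, so a dependence between operations assigned to processors that are `h` hops apart needs at least `h` intervening time steps to avoid a communication delay; the function and names below are hypothetical.

```python
# Hedged sketch: check whether an assignment of a program fragment's
# operations to linearly connected processors is delay-free, under the
# assumption that data travels one processor per time step.

def delay_free(deps, assignment):
    """deps: list of (producer, consumer) operation pairs.
    assignment: op -> (processor index, time step).
    Returns True iff every dependence is satisfied without
    inserting extra communication steps."""
    for prod, cons in deps:
        p_proc, p_time = assignment[prod]
        c_proc, c_time = assignment[cons]
        hops = abs(p_proc - c_proc)
        # The consumer must start at least one step after the producer,
        # and at least `hops` steps after it on a linear interconnect.
        if c_time - p_time < max(1, hops):
            return False
    return True

# Example: two additions feed a final multiply.
deps = [("add1", "mul"), ("add2", "mul")]
ok  = {"add1": (0, 0), "add2": (1, 0), "mul": (0, 1)}  # adjacent processors
bad = {"add1": (0, 0), "add2": (3, 0), "mul": (0, 1)}  # 3 hops, only 1 step
```

With the `ok` assignment, both producers are on the consumer's processor or an adjacent one, so one time step suffices; the `bad` assignment places a producer three hops away, which cannot reach the consumer in a single step.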