Interconnection networks for large-scale parallel processing: theory and case studies
Determining an Optimal Secondary Storage Service Rate for the PASM Control System. IEEE Transactions on Computers.
One class of reconfigurable parallel processing systems is based on a large number of processing elements, each consisting of a processor and a primary memory. To use the processing elements efficiently, it is desirable to overlap secondary-storage operations with the computations being performed by the processors. Because such systems are dynamically reconfigurable, the processors that will execute a new task may not be selected until they are ready to run it; a task must therefore be preloaded before the final selection of the processors on which it will execute. Two schemes are presented that allow the secondary storage to preload input data and programs into the primary memories, increasing processor utilization and decreasing system response time. PASM is used as an example system for comparing the performance of the schemes through simulation studies. The results show that both methods are effective. These schemes can be applied to any reconfigurable parallel processing system that uses a centralized scheduling policy.
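To illustrate why preloading helps, the following is a minimal sketch, not the paper's simulation model: it assumes a sequence of tasks, each requiring a secondary-storage load followed by a compute phase, and that with preloading the load for the next task can be staged into primary memory while the current task computes. The task list, timings, and function names are hypothetical.

```python
# Hypothetical illustration: compare total completion time with and
# without overlapping secondary-storage loads and computation.
# Each task is a (load_time, compute_time) pair.

def makespan_no_preload(tasks):
    # Loads and computes strictly alternate: no overlap.
    return sum(load + compute for load, compute in tasks)

def makespan_with_preload(tasks):
    # The load for task i+1 overlaps the compute of task i,
    # so each overlapped step costs the longer of the two phases.
    total = tasks[0][0]                      # first load cannot be hidden
    for i in range(len(tasks) - 1):
        total += max(tasks[i][1], tasks[i + 1][0])
    total += tasks[-1][1]                    # last compute phase
    return total

tasks = [(4, 10), (3, 8), (5, 12)]
print(makespan_no_preload(tasks))    # 42: loads fully serialized
print(makespan_with_preload(tasks))  # 34: loads hidden behind computation
```

In this toy model, preloading hides each load behind the preceding compute phase whenever the compute phase is longer, which is the same overlap the paper's schemes aim to achieve between secondary storage and the processing elements.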