An approach to designing a hybrid parallel system that adapts to different types of parallelism is presented. To this end, an adaptive parallel system (APS) is proposed. The APS is constructed by tightly integrating two different types of parallel architecture, a multiprocessor system and a memory-based processor array (MPA), into a single machine. The multiprocessor executes medium- to coarse-grain parallelism, while the MPA executes fine-grain data parallelism. An important feature of the APS is that data-parallel code is dispatched to the MPA through the usual subroutine-call interface, so the existence of the MPA is transparent to programmers. The goal of this research is to design an underlying base architecture that executes a broad range of applications efficiently, from coarse-grain to fine-grain parallelism. A performance model is also provided for fair comparison with other approaches. The proposed APS turns out to deliver significant performance improvement and cost effectiveness for highly parallel applications containing a mixed set of parallelisms.

Keywords: parallel processing, system architecture, SIMD, multiprocessors