Several emerging application areas require intelligent management of distributed data and of tasks that encapsulate execution units for collections of processors or processor groups. This paper describes an integration of data and task parallelism to address the needs of such applications in the context of the Global Arrays (GA) programming model. GA provides programming interfaces for managing shared arrays based on non-partitioned global-address-space programming concepts. Compatibility with MPI enables the scientific programmer to benefit from the performance and productivity advantages of these high-level programming abstractions while using standard programming languages and compilers.