Optimizing noncontiguous accesses in MPI-IO
Parallel Computing
MPI-IO/GPFS, an optimized implementation of MPI-IO on top of GPFS
Proceedings of the 2001 ACM/IEEE Conference on Supercomputing
GPFS: A Shared-Disk File System for Large Computing Clusters
Proceedings of the Conference on File and Storage Technologies (FAST '02)
Integrating collective I/O and cooperative caching into the "clusterfile" parallel file system
Proceedings of the 18th Annual International Conference on Supercomputing
Exploiting Lustre File Joining for Effective Collective IO
Proceedings of the Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGRID '07)
Disk-directed I/O for MIMD multiprocessors
Proceedings of the 1st USENIX Conference on Operating Systems Design and Implementation (OSDI '94)
Collective caching: application-aware client-side file caching
Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing (HPDC-14), 2005
View-Based Collective I/O for MPI-IO
Proceedings of the Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGRID '08)
Cooperative write-behind data buffering for MPI I/O
Proceedings of the 12th European PVM/MPI Users' Group Conference on Recent Advances in Parallel Virtual Machine and Message Passing Interface (PVM/MPI '05)
This paper presents an implementation of the MPI-IO interface for GPFS inside the ROMIO distribution. The experimental section compares three collective I/O implementations: two-phase I/O, the default file-system-independent method of ROMIO; view-based I/O, a file-system-independent method we developed in previous work; and a GPFS-specific collective I/O implementation based on data shipping. The results show that data-shipping-based collective I/O performs better for writing, while view-based I/O performs better for reading.
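For context, the collective I/O methods compared above all sit behind the same MPI-IO calls. The following is a minimal sketch, not taken from the paper, of the collective-write path they optimize: each rank sets a file view and then issues a collective write, with ROMIO (or a file-system-specific driver such as the GPFS one described here) choosing the underlying strategy. The file name "out.dat" and the contiguous per-rank layout are illustrative assumptions.

```c
/* Minimal MPI-IO collective write sketch (illustrative; file name,
 * block size, and layout are assumptions, not from the paper). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1024;                 /* ints written per rank */
    int *buf = malloc(count * sizeof(int));
    for (int i = 0; i < count; i++)
        buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank declares its view of the file: here a contiguous
     * block at offset rank * count ints.  Noncontiguous views built
     * from derived datatypes are where collective optimizations such
     * as two-phase or view-based I/O pay off. */
    MPI_Offset disp = (MPI_Offset)rank * count * sizeof(int);
    MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);

    /* Collective write: the MPI-IO implementation decides how the
     * actual file-system accesses are aggregated. */
    MPI_File_write_all(fh, buf, count, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```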