Portable and scalable MPI shared file pointers
Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface (EuroMPI'11)
The MPI-2 standard defines a class of file access routines that provide a shared file pointer: all processes using these routines update the same file pointer when accessing the file. Coordination between ranks happens implicitly inside the MPI library, relieving the application developer of this responsibility. The shared file pointer routines, however, have attracted little interest from developers because of several issues ranging from usability and portability to performance. We examine the use of these routines in the HDF5 library, a high-level I/O library built on top of MPI, and in Vampir, a performance analysis toolkit. We highlight some of the reasons preventing their adoption and discuss how these routines could be modified to increase their usability. We also propose a novel implementation based on the one-sided communication routines introduced by the upcoming MPI-3.0 standard.