Parallel I/O for high performance computing
MPI I/O is part of the MPI-2 specification and defines file I/O operations for parallel MPI applications. Compared to regular POSIX-style I/O functions, MPI I/O offers features such as the distinction between individual file pointers, maintained on a per-process basis, and a shared file pointer maintained across a group of processes. The objective of this study is to evaluate algorithms for shared file pointer operations in MPI I/O. We present three algorithms that provide shared file pointer operations on file systems that do not support file locking. The algorithms are evaluated on a parallel PVFS2 file system on an InfiniBand cluster and on a local ext3 file system using an 8-core SMP.