SPIFFI-A Scalable Parallel File System for the Intel Paragon
IEEE Transactions on Parallel and Distributed Systems
PVFS: A Parallel File System for Linux Clusters
ALS'00 Proceedings of the 4th Annual Linux Showcase & Conference - Volume 4
Implementing MPI-IO Atomic Mode and Shared File Pointers Using MPI One-Sided Communication
International Journal of High Performance Computing Applications
An efficient format for nearly constant-time access to arbitrary time intervals in large trace files
Scientific Programming - Large-Scale Programming Tools and Environments
I/O performance challenges at leadership scale
Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis
Middleware support for many-task computing
Cluster Computing
Optimization Techniques at the I/O Forwarding Layer
CLUSTER '10 Proceedings of the 2010 IEEE International Conference on Cluster Computing
On the usability of the MPI shared file pointer routines
EuroMPI'12 Proceedings of the 19th European conference on Recent Advances in the Message Passing Interface
While the I/O functions described in the MPI standard have included shared file pointer support from the beginning, the performance and portability of these functions have been subpar at best. ROMIO [1], which provides the MPI-IO functionality for most MPI libraries, to this day uses a separate file to manage the shared file pointer: this hidden file provides the shared location that holds the pointer's current value. Unfortunately, each access to the shared file pointer therefore involves file lock management and an update to the file contents. Furthermore, support for shared file pointers is not universally available, because few file systems support native shared file pointers [5] and some file systems do not support file locks at all [3]. Application developers rarely use shared file pointers, even though many applications could benefit from this file I/O capability. These applications are typically loosely coupled and rarely exhibit application-wide synchronization; examples include application tracing toolkits [8,4] and many-task computing applications [10]. Instead of the shared file pointer I/O model, these application classes frequently fall back on file-per-process, file-per-thread, and file-per-rank approaches. While these approaches work relatively well at smaller scales, they fail to scale to leadership-class computing systems because of the intense metadata load they generate. Recent research has identified significant improvements from using shared-file I/O instead of multifile I/O patterns on leadership-class systems [6].