Collective I/O, such as that provided by MPI-IO, enables a group of processes to collaborate on I/O requests for greater parallelism. Its implementation involves partitioning the file into domains, and choosing the right partitioning is key to achieving high I/O performance. Modern parallel file systems maintain data consistency through distributed file locking rather than centralized lock management, and different locking protocols can have a significant impact on the degree of parallelism attainable by a given file domain partitioning method. In this paper, we propose dynamic file partitioning methods that adapt to the locking protocols of the underlying parallel file systems, and we evaluate the performance of four partitioning methods under two locking protocols. Experiments with multiple I/O benchmarks demonstrate that no single partitioning method guarantees the best performance in all cases. Using MPI-IO as the implementation platform, we provide guidelines for selecting the most appropriate partitioning method for a given I/O pattern and file system.
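To illustrate the setting the abstract describes, the following minimal C sketch (not the authors' benchmark) shows a shared-file collective write through MPI-IO: each rank writes one contiguous block with MPI_File_write_at_all, and the MPI-IO layer performs file domain partitioning among its I/O aggregators internally. The hint values and file name are illustrative assumptions, not tuned recommendations from the paper.

/* Collective write to a shared file via MPI-IO; file domain
 * partitioning is handled inside the collective call. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int count = 1 << 20;                 /* 1 Mi ints per process */
    int *buf = malloc(count * sizeof(int));
    for (int i = 0; i < count; i++)
        buf[i] = rank;

    /* Hints that influence collective buffering and file-domain layout
     * in ROMIO-based MPI-IO implementations (values are illustrative). */
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_write", "enable");    /* force collective buffering  */
    MPI_Info_set(info, "cb_buffer_size", "16777216");  /* 16 MiB aggregation buffer   */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared_output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Each rank owns one contiguous region of the shared file; the
     * collective call lets the library partition the file among ranks. */
    MPI_Offset offset = (MPI_Offset)rank * count * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, count, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    free(buf);
    MPI_Finalize();
    return 0;
}

How the library splits this shared file into per-aggregator domains, and how those domains align with the file system's lock granularity, is exactly the partitioning decision the paper studies.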