Modern data-intensive structured datasets are constantly manipulated and migrated by parallel scientific applications. Directly supporting these time-consuming operations is an important step toward providing high-performance I/O solutions for modern large-scale applications. High-level interfaces such as HDF5 and Parallel netCDF provide convenient APIs for accessing structured datasets, and the MPI-IO interface also supports efficient access to structured data. Parallel file systems, however, have not traditionally supported such structured access from these higher-level interfaces. In this work, we present two contributions. First, we demonstrate an implementation of structured data access support in the context of the Parallel Virtual File System (PVFS). We call this support 'datatype I/O' because of its similarity to MPI datatypes; it is built with a reusable datatype-processing component from the MPICH2 MPI implementation. The second contribution of this work is a comparison of the I/O characteristics of modern high-performance noncontiguous I/O methods. We use this comparison to assess all the methods with three test applications, and we point to further optimisations that could be leveraged for even more efficient operation.