Input/output (I/O) operations can account for a significant proportion of the run-time of large scientific applications executed in parallel. Despite advances in file-format libraries, file system design and I/O hardware, a growing divergence exists between the performance of parallel file systems and compute processing rates. In this paper we use RIOT, an input/output tracing toolkit under development at the University of Warwick, to assess the performance of three industry-standard I/O benchmarks and mini-applications. We present a case study demonstrating the tracing and analysis capabilities of RIOT at scale, using MPI-IO, Parallel HDF5, and MPI-IO augmented with the Parallel Log-structured File System (PLFS) middleware developed at Los Alamos National Laboratory.
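The core idea behind an I/O tracing toolkit of this kind — intercepting each I/O call and recording its size and duration so that per-application I/O behaviour can be analysed afterwards — can be illustrated in spirit by the following minimal Python sketch. This is not RIOT's actual implementation (RIOT intercepts POSIX and MPI-IO calls at the library level); the `TracedFile` wrapper and its method names are hypothetical, chosen only to demonstrate the interception-and-timing pattern.

```python
import os
import tempfile
import time

class TracedFile:
    """Hypothetical illustration of I/O call interception (not RIOT itself):
    wrap a file object and record the byte count and wall-clock duration
    of each write call for later analysis."""

    def __init__(self, path, mode="w"):
        self._f = open(path, mode)
        self.events = []  # list of (operation, bytes, seconds) tuples

    def write(self, data):
        start = time.perf_counter()
        n = self._f.write(data)
        self.events.append(("write", len(data), time.perf_counter() - start))
        return n

    def close(self):
        self._f.close()

    def summary(self):
        """Aggregate the recorded events into simple statistics."""
        return {
            "calls": len(self.events),
            "bytes": sum(b for _, b, _ in self.events),
            "seconds": sum(t for _, _, t in self.events),
        }

# Usage: trace two writes to a temporary file and inspect the aggregates.
path = os.path.join(tempfile.gettempdir(), "trace_demo.txt")
tf = TracedFile(path)
tf.write("hello ")
tf.write("world")
tf.close()
stats = tf.summary()
print(stats["calls"], stats["bytes"])  # 2 calls, 11 bytes
```

A real tracer would apply the same pattern beneath the application rather than requiring code changes, e.g. by interposing on `write(2)` or the `MPI_File_*` routines, so that benchmarks and middleware layers such as PLFS can be measured without modification.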