Performance Measurement of Two Parallel File Systems
The Intel Concurrent File System (CFS) for the iPSC/2 hypercube is one of the first production file systems to decluster large files across multiple disks to improve I/O performance. The CFS also employs dedicated I/O nodes, operating asynchronously, that provide file caching and prefetching. Processing of an I/O request is distributed between the compute node that initiates it and the I/O nodes that service it. The effects of the various design decisions in the Intel CFS are difficult to determine without measurements of an actual system. We present performance measurements of the CFS on a hypercube with 32 compute nodes and four I/O nodes (four disks). Measurements of read/write rates from one compute node to one I/O node, from one compute node to multiple I/O nodes, and from multiple compute nodes to multiple I/O nodes form the basis of the study. Additional measurements show the effects of buffer size, caching, prefetching, and file preallocation on system performance.
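The declustering described above can be sketched as a round-robin mapping from a byte offset in a file to a block on one of the I/O nodes. This is a minimal illustration only: the 4 KB stripe unit and the mapping function are assumptions for the sketch, not the actual CFS on-disk layout.

```python
# Hedged sketch of round-robin file declustering across I/O nodes.
# BLOCK_SIZE is an assumed stripe unit, not the real CFS value.
BLOCK_SIZE = 4096

def decluster(offset, num_io_nodes):
    """Map a byte offset to (io_node, local_block, block_offset)."""
    block = offset // BLOCK_SIZE           # global block index in the file
    io_node = block % num_io_nodes         # round-robin across I/O nodes
    local_block = block // num_io_nodes    # block index on that node's disk
    return io_node, local_block, offset % BLOCK_SIZE

# Consecutive blocks land on different disks, so a large sequential
# transfer can be serviced by all four I/O nodes in parallel.
for off in (0, 4096, 8192, 12288, 16384):
    print(off, decluster(off, 4))
```

Because successive 4 KB blocks rotate across the four I/O nodes, a single compute node reading a large file keeps every disk busy, which is the effect the one-to-multiple measurements in the study quantify.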