Data ONTAP GX is a clustered network-attached file server composed of a number of cooperating filers. Each filer manages its own local file system, which consists of a number of disconnected flexible volumes. A separate namespace infrastructure runs within the cluster and connects the volumes into one or more namespaces by means of internal junctions. The cluster collectively exposes a potentially large number of separate virtual servers, each with its own independent namespace, security domain, and administrative domain. The cluster implements a protocol routing and translation layer that translates requests from all incoming file protocols into a single unified internal file-access protocol called SpinNP. Translated requests are then forwarded to the correct filer within the cluster for servicing by the local file system instance. This provides data location transparency, which is used to support transparent data migration, load balancing, mirroring for load sharing and data protection, and fault tolerance. The cluster also greatly simplifies the administration of a large number of filers by consolidating them into a single system image. Results from benchmarks (over one million file operations per second on a 24-node cluster) and customer experience demonstrate linear scaling.
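The routing idea behind this design can be pictured with a small sketch. The Go program below is a hypothetical illustration only: SpinNP is a proprietary protocol, and all names here (InternalRequest, Cluster, volumeOwner, Route) are invented for this example. It shows how a table mapping each flexible volume to its owning filer lets the cluster forward an already-translated request to the right node, and why updating that table is enough to support transparent data migration.

```go
// Hypothetical sketch of volume-to-filer request routing.
// Not the real SpinNP protocol; names are illustrative.
package main

import "fmt"

// InternalRequest models a request already translated from a client
// protocol (e.g. NFS or CIFS) into a single unified internal form.
type InternalRequest struct {
	VolumeID string // flexible volume that owns the target file
	Op       string // e.g. "READ", "WRITE", "LOOKUP"
	Path     string
}

// Cluster maps each flexible volume to the filer (node) that manages it.
// Clients see only the namespace, so a volume can move between filers by
// updating this table: that is the data location transparency the
// abstract describes.
type Cluster struct {
	volumeOwner map[string]string // volume ID -> filer name
}

// Route forwards a translated request to the filer that owns the volume.
// In the real system the request would travel over the cluster
// interconnect in the internal protocol; here we just name the target.
func (c *Cluster) Route(req InternalRequest) (string, error) {
	filer, ok := c.volumeOwner[req.VolumeID]
	if !ok {
		return "", fmt.Errorf("unknown volume %q", req.VolumeID)
	}
	return filer, nil
}

func main() {
	c := &Cluster{volumeOwner: map[string]string{
		"vol_home": "filer-03",
		"vol_proj": "filer-11",
	}}

	req := InternalRequest{VolumeID: "vol_home", Op: "READ", Path: "/home/alice/notes.txt"}
	if filer, err := c.Route(req); err == nil {
		fmt.Printf("%s %s -> %s\n", req.Op, req.Path, filer)
	}

	// Transparent migration: move the volume to another filer. Clients
	// are unaffected because routing is recomputed on every request.
	c.volumeOwner["vol_home"] = "filer-07"
	if filer, err := c.Route(req); err == nil {
		fmt.Printf("%s %s -> %s (after migration)\n", req.Op, req.Path, filer)
	}
}
```

In this toy model, load balancing and mirroring fall out of the same indirection: the owner table can be rewritten, or consulted against replicas, without any change visible at the protocol layer above it.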