Developing and deploying distributed file systems has been important for Grid computing. With GMount, non-privileged users can instantaneously and effortlessly build a distributed file system on arbitrary machines reachable via SSH. It scales to hundreds of nodes in wide-area Grid environments and adapts to NATs and firewalls. Unlike conventional distributed file systems, GMount can directly harness the local file system of each node without importing or exporting application data, and it exploits the network topology to make metadata operations locality-aware. In this paper, we present the design and implementation of GMount using two popular components: SSH and FUSE. We demonstrate its viability and the performance of its locality-aware metadata operations in a large-scale Grid with over 320 nodes spread across 12 clusters connected by heterogeneous wide-area links.
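The two building blocks named above, SSH and FUSE, are the same ones used by SSHFS, which mounts a remote directory over an ordinary SSH login without root privileges. The commands below are an illustrative SSHFS sketch of that idea, not GMount's own interface; the hostname and paths are hypothetical:

```shell
# Mount a remote directory over SSH via FUSE -- no root privileges needed,
# only a working SSH login and a local FUSE/SSHFS installation.
# (node01 and the paths are hypothetical examples.)
mkdir -p ~/grid-mnt
sshfs node01:/home/alice/data ~/grid-mnt

# The remote files now appear under the local mount point.
ls ~/grid-mnt

# Unmount when done (Linux; on macOS/BSD use `umount ~/grid-mnt`).
fusermount -u ~/grid-mnt
```

GMount generalizes this single-link idea to many nodes at once, unioning the local file systems of all SSH-reachable machines into one shared namespace.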