Scale and performance in a distributed file system
ACM Transactions on Computer Systems (TOCS)
Disco: running commodity operating systems on scalable multiprocessors
ACM Transactions on Computer Systems (TOCS)
Frangipani: a scalable distributed file system
Proceedings of the sixteenth ACM symposium on Operating systems principles
Not all hits are created equal: cooperative proxy caching over a wide-area network
Computer Networks and ISDN Systems - Selected papers of the 3rd international caching workshop
Separating key management from file system security
Proceedings of the seventeenth ACM symposium on Operating systems principles
Fast and secure distributed read-only file system
ACM Transactions on Computer Systems (TOCS)
Venti: A New Approach to Archival Storage
FAST '02 Proceedings of the Conference on File and Storage Technologies
Memory resource management in VMware ESX server
ACM SIGOPS Operating Systems Review - OSDI '02: Proceedings of the 5th symposium on Operating systems design and implementation
The many faces of publish/subscribe
ACM Computing Surveys (CSUR)
Xen and the art of virtualization
SOSP '03 Proceedings of the nineteenth ACM symposium on Operating systems principles
Proper: privileged operations in a virtualised system environment
ATEC '05 Proceedings of the annual conference on USENIX Annual Technical Conference
Reliability and security in the CoDeeN content distribution network
ATEC '04 Proceedings of the annual conference on USENIX Annual Technical Conference
Democratizing content publication with coral
NSDI'04 Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation - Volume 1
Shark: scaling file servers via cooperative caching
NSDI'05 Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2
File system design for an NFS file server appliance
WTEC'94 Proceedings of the USENIX Winter 1994 Technical Conference on USENIX Winter 1994 Technical Conference
Single instance storage in Windows® 2000
WSS'00 Proceedings of the 4th conference on USENIX Windows Systems Symposium - Volume 4
Scale and performance in the CoBlitz large-file distribution service
NSDI'06 Proceedings of the 3rd conference on Networked Systems Design & Implementation - Volume 3
A hierarchical internet object cache
ATEC '96 Proceedings of the 1996 annual conference on USENIX Annual Technical Conference
Understanding and characterizing PlanetLab resource usage for federated network testbeds
Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference
Improving virtual appliance management through virtual layered file systems
LISA'11 Proceedings of the 25th international conference on Large Installation System Administration
In virtual machine environments, each application is often run in its own virtual machine (VM), isolating it from other applications on the same physical machine. Contention for memory, disk space, and network bandwidth among VMs, coupled with an inability to share caused by the isolation VMs provide, leads to heavy resource utilization. VMs also increase management overhead, as each is essentially a separate system. Stork is a package management tool for virtual machine environments designed to alleviate these problems. Stork securely and efficiently downloads packages to physical machines and shares them between VMs. Disk space and memory requirements are reduced because shared files, such as libraries and binaries, require only one persistent copy per physical machine. Experiments show that Stork reduces the disk space required to install additional copies of a package by over an order of magnitude, and memory usage by about 50%. Stork downloads each package once per physical machine, no matter how many VMs install it. The transfer protocols used during download improve elapsed time by a factor of seven and reduce repository traffic by an order of magnitude. Stork users can manage groups of VMs, even groups distributed around the world, with the ease of managing a single machine. Stork is a real service that has run on PlanetLab for over four years and has managed thousands of VMs.
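The two mechanisms the abstract describes — download a package once per physical machine, then let every VM share the single persistent copy — can be sketched in a few lines. This is a hypothetical illustration, not Stork's actual code: the function names, the `/tmp/stork_demo` cache path, and the use of hard links to realize "one persistent copy per physical machine" are all assumptions made for the example.

```python
import os

# Illustrative sketch (not Stork's implementation): a machine-wide cache
# holds one copy of each package; VMs receive hard links into it.
CACHE_DIR = "/tmp/stork_demo/cache"  # assumed machine-wide cache location

def fetch_once(name, version, fetch_fn):
    """Download a package only if no VM on this machine has it yet."""
    dest = os.path.join(CACHE_DIR, f"{name}-{version}")
    if not os.path.isdir(dest):   # first VM to ask triggers the download
        os.makedirs(dest)
        fetch_fn(dest)            # e.g. pull the package files from a repository
    return dest

def install_into_vm(cached_dir, vm_root):
    """Expose the cached package files inside a VM via hard links."""
    os.makedirs(vm_root, exist_ok=True)
    for fname in os.listdir(cached_dir):
        src = os.path.join(cached_dir, fname)
        dst = os.path.join(vm_root, fname)
        if not os.path.exists(dst):
            os.link(src, dst)     # same inode, so no extra disk space is used
```

With this layout, installing the same package into a second VM costs no additional download and almost no additional disk space, mirroring the per-machine savings the abstract reports.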