The MOSIX multicomputer operating system for high performance cluster computing
Future Generation Computer Systems - Special issue on HPCN '97
Matchmaking: Distributed Resource Management for High Throughput Computing
HPDC '98 Proceedings of the 7th IEEE International Symposium on High Performance Distributed Computing
User-Centric Performance Analysis of Market-Based Cluster Batch Schedulers
CCGRID '02 Proceedings of the 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid
SHARP: an architecture for secure resource peering
SOSP '03 Proceedings of the nineteenth ACM symposium on Operating systems principles
Using PlanetLab for network research: myths, realities, and best practices
ACM SIGOPS Operating Systems Review
CoMon: a mostly-scalable monitoring system for PlanetLab
ACM SIGOPS Operating Systems Review
Tycoon: An implementation of a distributed, market-based resource allocation system
Multiagent and Grid Systems
Reliability and security in the CoDeeN content distribution network
ATEC '04 Proceedings of the annual conference on USENIX Annual Technical Conference
Democratizing content publication with Coral
NSDI'04 Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation - Volume 1
Operating system support for planetary-scale network services
NSDI'04 Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation - Volume 1
Mirage: a microeconomic resource allocation system for sensornet testbeds
EmNets '05 Proceedings of the 2nd IEEE workshop on Embedded Networked Sensors
Experiences building PlanetLab
OSDI '06 Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation - Volume 7
Service placement in a shared wide-area platform
ATEC '06 Proceedings of the annual conference on USENIX '06 Annual Technical Conference
Scale and performance in the CoBlitz large-file distribution service
NSDI'06 Proceedings of the 3rd conference on Networked Systems Design & Implementation - Volume 3
Stork: package management for distributed VM environments
LISA'07 Proceedings of the 21st conference on Large Installation System Administration Conference
Remote control: distributed application configuration, management, and visualization with plush
LISA'07 Proceedings of the 21st conference on Large Installation System Administration Conference
Design and implementation trade-offs for wide-area resource discovery
ACM Transactions on Internet Technology (TOIT)
Seattle: a platform for educational cloud computing
Proceedings of the 40th ACM technical symposium on Computer science education
Privacy-preserving P2P data sharing with OneSwarm
Proceedings of the ACM SIGCOMM 2010 conference
Measuring bandwidth between PlanetLab nodes
PAM'05 Proceedings of the 6th international conference on Passive and Active Network Measurement
Reducing allocation errors in network testbeds
Proceedings of the 2012 ACM conference on Internet measurement conference
Global network testbeds are crucial for innovative network research. Built on the success of PlanetLab, the next generation of federated testbeds is under active development, yet very little is known about resource usage in such shared infrastructures. In this paper, we conduct an extensive study of usage profiles in PlanetLab that we collected over six years by running CoMon, a PlanetLab monitoring service. We examine various aspects of node-level behavior as well as experiment-centric behavior, and describe their implications for resource management in the federated testbeds. Our main contributions are threefold: (1) Contrary to common belief, our measurements show there is no tragedy of the commons in PlanetLab, since most PlanetLab experiments exploit the system's network reach more than its hardware resources; (2) We examine resource allocation systems proposed for the federated testbeds, such as bartering and central banking schemes, and show that they would handle only a small percentage of the total usage in PlanetLab; and (3) We identify factors that account for high resource contention or poor utilization in PlanetLab nodes, analyze workload imbalance and problematic slices, and describe the implications of our measurements for improving the overall utility of the testbed.
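To make the node-level analysis concrete, the following is a minimal sketch of how contention and underutilization might be summarized from CoMon-style samples. The field names, thresholds, and classification rules here are hypothetical illustrations, not the paper's actual methodology.

```python
from statistics import mean

# Hypothetical per-node samples: (CPU utilization in [0, 1], live slices on the node).
samples = [
    (0.95, 40),  # heavily loaded node shared by many slices
    (0.10, 3),   # mostly idle node
    (0.80, 25),
    (0.05, 1),
]

def classify(cpu, slices, busy=0.9, idle=0.15):
    """Label a sample as contended, underutilized, or normal.

    A node counts as contended when it is near CPU saturation while
    many slices share it; as underutilized when it sits nearly idle.
    Thresholds are illustrative assumptions only.
    """
    if cpu >= busy and slices > 10:
        return "contended"
    if cpu <= idle:
        return "underutilized"
    return "normal"

labels = [classify(cpu, n) for cpu, n in samples]
avg_cpu = mean(cpu for cpu, _ in samples)
print(labels, round(avg_cpu, 3))
```

Aggregating such labels across all nodes over time would expose the workload imbalance the abstract refers to: a small set of persistently contended nodes alongside a long tail of idle ones.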