In this work we report on data gathered by deploying a monitoring and benchmarking infrastructure on two production grid platforms, TeraGrid and Geon. Our results show that these production grids are frequently unavailable, with success rates for benchmark and application runs between 55% and 80%. We also found performance fluctuations of roughly 50%, mostly attributable, as expected, to batch schedulers. Finally, we investigate whether the execution time of a typical grid application can be predicted from previous runs of simple benchmarks. Perhaps surprisingly, we find that application execution time can be predicted with a relative error as low as 9%.
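The abstract does not describe the prediction method itself. One simple approach consistent with the idea of predicting application time from prior benchmark runs is to scale a past application run by the ratio of benchmark times; the sketch below is an illustrative assumption, not the paper's actual model, and the function names and numbers are hypothetical.

```python
# Hypothetical sketch: predict an application's execution time on a grid
# resource from prior runs of a simple benchmark on the same resource.
# Scaling by the benchmark-time ratio is an illustrative assumption,
# not the method evaluated in the paper.

def predict_app_time(past_app_time, past_bench_time, current_bench_time):
    """Scale a past application run by the change in benchmark time."""
    return past_app_time * (current_bench_time / past_bench_time)

def relative_error(predicted, actual):
    """Relative prediction error, e.g. 0.09 for a 9% error."""
    return abs(predicted - actual) / actual

# Example (made-up numbers): the application previously took 1000 s when
# the benchmark took 10 s; the benchmark now takes 12 s, so we predict
# a proportionally slower run of 1200 s.
pred = predict_app_time(1000.0, 10.0, 12.0)
err = relative_error(pred, 1100.0)  # if the actual run took 1100 s
```

A real predictor would likely aggregate many past (benchmark, application) pairs, e.g. via regression, and report the distribution of relative errors rather than a single value.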