Job scheduling policies for HPC centers have been extensively studied in recent years, especially backfilling-based policies. Almost all of these studies have been carried out using simulation tools, which evaluate the performance of scheduling policies using workloads and a resource definition as input. To the best of our knowledge, all existing simulators use the runtime (either requested or real) provided in the workload as the basis of their simulations. However, the runtime of a job, even when executed with a fixed number of processors, depends on runtime issues such as the specific resource selection policy used to allocate the jobs or the jobs' resource requirements. This paper is the first part of a larger research project that analyzes the impact on system performance of considering the resource sharing of running jobs. To this end, we have extended our job scheduler simulator (the Alvio simulator) with a performance model that estimates the penalty introduced in the application runtime when jobs share memory bandwidth. Experiments have been conducted with two resource selection policies, and we present the impact both in terms of global performance metrics, such as average slowdown, and per-job metrics, such as the percentage of penalized runtime.
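The penalty model described above can be illustrated with a minimal sketch. This is not the Alvio simulator's actual model; it assumes a simple proportional-slowdown rule (a job's runtime stretches by the factor by which the co-scheduled jobs' combined demand exceeds the node's memory bandwidth), and all names and parameters are illustrative:

```python
# Hypothetical sketch of a memory-bandwidth-sharing penalty model.
# Assumption: when the combined bandwidth demand of jobs sharing a node
# exceeds the node's capacity, each job's runtime grows proportionally
# to the saturation factor. This is an illustration, not Alvio's model.

def penalized_runtime(runtime, own_bw, coscheduled_bw, node_bw):
    """Return the job's runtime after the sharing penalty.

    runtime        -- runtime with exclusive access to the node
    own_bw         -- this job's memory bandwidth demand
    coscheduled_bw -- bandwidth demands of jobs sharing the node
    node_bw        -- the node's total memory bandwidth
    """
    demanded = own_bw + sum(coscheduled_bw)
    if demanded <= node_bw:
        return runtime  # bandwidth not saturated: no penalty
    # Each job only receives its proportional share of the bandwidth,
    # so the runtime is stretched by the saturation factor.
    return runtime * (demanded / node_bw)

def percentage_penalized_runtime(base_runtime, actual_runtime):
    """Per-job impact metric: extra runtime due to sharing, in percent."""
    return 100.0 * (actual_runtime - base_runtime) / base_runtime
```

For example, a job demanding 4 GB/s co-scheduled with another 4 GB/s job on an 8 GB/s node runs unpenalized, while the same pair on a 4 GB/s node sees its runtime doubled (a percentage of penalized runtime of 100%).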