Backfill is a technique in which lower-priority jobs requiring fewer resources are initiated ahead of one or more currently waiting higher-priority jobs whose required resources are not yet available. Processors are most often the resource involved, and the purpose of backfilling is to increase system utilization and reduce average wait time. Generally, a scheduler backfills only when the user-specified run times indicate that executing the lower-priority jobs will not delay the anticipated initiation of the higher-priority jobs. This paper explores a relaxed backfill strategy in which lower-priority jobs are initiated as long as they do not delay the highest-priority job by too much. A simulator was developed to model this approach; it uses a delay-factor parameter to bound the acceptable delay as a multiple of the highest-priority job's current wait time. Experiments were performed over a range of delay-factor values, with both user-estimated run times and actual run times, using workload data from two parallel systems, a Cray T3E and an SGI Origin 3800. For these workloads, overall average job wait time typically decreases as the delay factor increases, and using user-estimated run times proves superior to using actual run times. More experiments must be performed to determine the generality of these results.
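The relaxed admission rule described above can be sketched in a few lines. This is an illustrative assumption about how such a test might look, not the paper's actual simulator code; the function name, signature, and the way the delay is estimated are all hypothetical.

```python
def can_backfill(imposed_delay: float, top_job_wait: float,
                 delay_factor: float) -> bool:
    """Relaxed backfill admission test (sketch, not the paper's code).

    imposed_delay  -- estimated delay (e.g., from user-specified run
                      times) that starting the candidate job would add
                      to the highest-priority job's anticipated start.
    top_job_wait   -- how long the highest-priority job has waited.
    delay_factor   -- the tunable parameter: the candidate is admitted
                      if its delay is at most delay_factor times the
                      top job's wait time.

    delay_factor == 0 recovers strict backfilling, where no delay to
    the highest-priority job is tolerated.
    """
    return imposed_delay <= delay_factor * top_job_wait

# Example: with a delay factor of 0.2, a candidate that would push the
# top job back 10 minutes is admitted only if that job has already
# waited at least 50 minutes.
print(can_backfill(10.0, 60.0, 0.2))   # admitted
print(can_backfill(10.0, 40.0, 0.2))   # rejected
print(can_backfill(0.0, 40.0, 0.0))    # strict backfill: zero delay ok
```

As the abstract notes, increasing the delay factor trades a bounded delay for the head-of-queue job against lower average wait time across the whole workload.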