The optimization of parallel applications is difficult to achieve with classical optimization techniques because of the diversity of the applications themselves and the variety of actual parallel and distributed platforms and environments. Adaptive algorithmic schemes, capable of dynamically changing the allocation of jobs during execution to optimize global system behavior, are the best alternative for solving this problem. In this paper, we focus on non-clairvoyant scheduling of parallel jobs with known resource requirements but unknown running times, with emphasis on the regulation of idle periods in the context of general list policies. We consider a new family of two-phase scheduling strategies that successively combine sequential and parallel execution of jobs. We generalize known worst-case performance bounds by considering two parameters in addition to the number of processors and the maximum processor requirement studied in the literature, namely the job parallelization penalty and the idle regulation factor. Furthermore, we prove that under certain conditions of idle regulation, the performance guarantee of parallel job scheduling in space-sharing mode can be improved.
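To make the setting concrete, the following is a minimal sketch of the greedy list-scheduling baseline that such strategies refine: rigid jobs, each described by a processor requirement and a (non-clairvoyantly unknown, here simulated) running time, are started in list order whenever enough processors are free. The function name, job representation, and event-driven structure are our own illustration, not the paper's algorithm; the paper's contribution (two-phase sequential/parallel execution with idle regulation) builds on top of this kind of space-sharing schedule.

```python
import heapq

def list_makespan(jobs, m):
    """Greedy list scheduling of rigid parallel jobs on m processors.

    jobs: list of (size, runtime) pairs, where size <= m is the number
    of processors the job occupies for its whole execution.
    Whenever processors free up, the first queued jobs that fit are
    started, in list order. Returns the makespan of the schedule.
    """
    queue = list(jobs)   # waiting jobs, in list order
    free = m             # currently idle processors
    running = []         # min-heap of (finish_time, size)
    now = 0.0
    while queue or running:
        # Start every queued job, scanning in list order, that fits now.
        started = True
        while started:
            started = False
            for i, (size, runtime) in enumerate(queue):
                if size <= free:
                    free -= size
                    heapq.heappush(running, (now + runtime, size))
                    queue.pop(i)
                    started = True
                    break
        # Advance time to the next job completion and release processors.
        finish, size = heapq.heappop(running)
        now = finish
        free += size
    return now

# Example: on m = 3 processors, a 3-processor job must wait for the
# machine to drain, leaving processors idle in the meantime -- exactly
# the kind of idle period that an idle regulation factor would bound.
print(list_makespan([(2, 3.0), (1, 2.0), (3, 1.0)], m=3))  # → 4.0
```

Greedy list schedules of this kind carry a classical worst-case guarantee of roughly a factor of 2 relative to the optimal makespan; the parameters introduced in the paper (parallelization penalty and idle regulation factor) generalize and, under the stated conditions, tighten bounds of this type.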