Simultaneous multithreading: maximizing on-chip parallelism
ISCA '95 Proceedings of the 22nd annual international symposium on Computer architecture
Effective distributed scheduling of parallel workloads
Proceedings of the 1996 ACM SIGMETRICS international conference on Measurement and modeling of computer systems
A closer look at coscheduling approaches for a network of workstations
Proceedings of the eleventh annual ACM symposium on Parallel algorithms and architectures
The impact of job memory requirements on gang-scheduling performance
ACM SIGMETRICS Performance Evaluation Review
Job scheduling in the presence of multiple resource requirements
SC '99 Proceedings of the 1999 ACM/IEEE conference on Supercomputing
A simulation-based study of scheduling mechanisms for a dynamic cluster environment
Proceedings of the 14th international conference on Supercomputing
IEEE Transactions on Parallel and Distributed Systems
A Slowdown Model for Applications Executing on Time-Shared Clusters of Workstations
IEEE Transactions on Parallel and Distributed Systems
Impact of Workload and System Parameters on Next Generation Cluster Scheduling Mechanisms
IEEE Transactions on Parallel and Distributed Systems
An infrastructure for efficient parallel job execution in Terascale computing environments
SC '98 Proceedings of the 1998 ACM/IEEE conference on Supercomputing
Adaptive Scheduling under Memory Pressure on Multiprogrammed SMPs
IPDPS '02 Proceedings of the 16th International Parallel and Distributed Processing Symposium
IPPS '99/SPDP '99 Proceedings of the 13th International Symposium on Parallel Processing and the 10th Symposium on Parallel and Distributed Processing
Implications of I/O for Gang Scheduled Workloads
IPPS '97 Proceedings of the Job Scheduling Strategies for Parallel Processing
A Historical Application Profiler for Use by Parallel Schedulers
IPPS '97 Proceedings of the Job Scheduling Strategies for Parallel Processing
Dynamic Coscheduling on Workstation Clusters
IPPS/SPDP '98 Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing
Characteristics of a Large Shared Memory Production Workload
JSSPP '01 Revised Papers from the 7th International Workshop on Job Scheduling Strategies for Parallel Processing
IPDPS '03 Proceedings of the 17th International Symposium on Parallel and Distributed Processing
Gang Scheduling with Memory Considerations
IPDPS '00 Proceedings of the 14th International Symposium on Parallel and Distributed Processing
IEEE Transactions on Parallel and Distributed Systems
The workload on parallel supercomputers: modeling the characteristics of rigid jobs
Journal of Parallel and Distributed Computing
Concurrency and Computation: Practice & Experience
LOMARC — lookahead matchmaking for multi-resource coscheduling
JSSPP'04 Proceedings of the 10th international conference on Job Scheduling Strategies for Parallel Processing
Time and space adaptation for computational grids with the ATOP-Grid middleware
Future Generation Computer Systems
Job scheduling typically focuses on the CPU, with little existing work accounting for I/O or memory. Time-shared execution offers the chance to hide I/O and long communication latencies, though it potentially creates memory conflicts. Hyperthreaded CPUs support coscheduling without any context switches and provide additional options for sharing CPU-internal resources. We present an approach that incorporates all of these resources into the schedule optimization and improves utilization by coscheduling two jobs where feasible. Our LOMARC approach partially reorders the queue by lookahead to increase the potential for finding good matches. In simulations based on the workload model of Lublin and Feitelson, we have obtained improvements between 30 percent and 50 percent in both response times and relative bounded response times on hyperthreaded CPUs (i.e., cutting times to two-thirds or one-half).
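The lookahead matchmaking idea can be illustrated with a minimal Python sketch: the scheduler takes the job at the head of the queue and searches a bounded window of later jobs for a coschedule partner whose memory and CPU demands are compatible, partially reordering the queue when a match is found. The `Job` fields, the `good_match` thresholds, and the lookahead depth below are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    mem: float  # fraction of node memory required
    cpu: float  # CPU-boundedness in [0, 1]; low values indicate I/O- or
                # communication-bound phases that a partner could overlap

def good_match(a, b, mem_cap=1.0, cpu_cap=1.2):
    """Two jobs may share a hyperthreaded node if their combined memory
    fits and their combined CPU demand leaves slack to hide latencies."""
    return a.mem + b.mem <= mem_cap and a.cpu + b.cpu <= cpu_cap

def lookahead_pick(queue, depth=4):
    """Pair the head job with a partner among the next `depth` jobs,
    partially reordering the queue; otherwise run the head alone."""
    head, rest = queue[0], queue[1:]
    candidates = [j for j in rest[:depth] if good_match(head, j)]
    if not candidates:
        return [head], rest
    # Prefer the partner whose CPU profile is most complementary to the head.
    partner = min(candidates, key=lambda j: abs((head.cpu + j.cpu) - 1.0))
    rest.remove(partner)
    return [head, partner], rest

queue = [Job("A", 0.6, 0.9), Job("B", 0.5, 0.8), Job("C", 0.3, 0.2)]
scheduled, remaining = lookahead_pick(queue)
print([j.name for j in scheduled])  # CPU-bound A pairs with I/O-bound C
```

In this example B is skipped because A and B together exceed the memory cap, while the I/O-bound C is pulled forward past B, mirroring the partial reordering described in the abstract.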