Many large-scale scientific applications are constructed as workflows because they involve large amounts of interrelated computation and communication. Workflow scheduling has long been a research topic in parallel and distributed computing, but most previous research focuses on scheduling a single workflow. With the emergence of cloud computing, users now have easy access to on-demand high-performance computing resources, commonly called HPC cloud. Because an HPC cloud must serve many users simultaneously, workflows submitted by different users commonly run concurrently, so scheduling concurrent workflows efficiently becomes an important issue in HPC cloud environments. Owing to the dependencies and communication costs between tasks in a workflow, gaps often form in a workflow's schedule. In this paper, we propose a method that exploits such schedule gaps to efficiently schedule concurrent workflows in HPC cloud. The proposed scheduling method was evaluated in a series of simulation experiments and compared with an existing method from the literature. The results indicate that our method delivers good performance and outperforms the existing method significantly in terms of average makespan, with up to 18% improvement.
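The general idea of exploiting schedule gaps can be illustrated with a minimal insertion-based placement sketch. This is not the paper's algorithm; the function name and data layout below are hypothetical, chosen only to show how an idle interval between a processor's busy slots can absorb a ready task from another workflow.

```python
# Illustrative sketch (assumed names, not the paper's method): find the
# earliest idle gap on one processor's schedule that can hold a new task.
def earliest_gap_start(busy, duration, ready_time):
    """Return the earliest start time >= ready_time at which a task of
    `duration` fits, given `busy` as a sorted list of (start, end) slots."""
    t = ready_time
    for start, end in busy:
        if t + duration <= start:   # task fits in the gap before this slot
            return t
        t = max(t, end)             # otherwise skip past this busy slot
    return t                        # no gap large enough: append at the end

# Processor busy during [0,4) and [10,14); a 3-unit task ready at time 2
# can be backfilled into the gap [4,10), starting at time 4.
busy_slots = [(0, 4), (10, 14)]
print(earliest_gap_start(busy_slots, 3, 2))   # -> 4
print(earliest_gap_start(busy_slots, 8, 0))   # -> 14 (no gap fits)
```

A full concurrent-workflow scheduler would apply such a test across all processors and respect each task's dependency-derived ready time; the sketch only shows the gap-filling step itself.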