Data- and task-parallel scheduling for dense linear algebra kernels aims to minimize the processing time of an application composed of several such kernels. The scheduling strategy presented here combines the task parallelism exploited when scheduling independent tasks with the data parallelism available within each linear algebra kernel. This problem has previously been studied for scheduling independent tasks on homogeneous machines; here we propose a methodology for heterogeneous clusters and show that it achieves significant performance improvements.
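To illustrate the flavor of such mixed scheduling, the sketch below greedily assigns independent kernels to groups of processors on a heterogeneous cluster, picking for each kernel the group size that minimizes its estimated finish time. This is only a minimal illustration under assumed inputs and an assumed parallel-efficiency cost model, not the algorithm proposed in the paper; the task sizes, processor speeds, and efficiency factor are all hypothetical.

```python
def runtime(work, speeds, efficiency=0.9):
    """Estimated time for a data-parallel kernel on a processor group.
    Aggregate speed is discounted by an efficiency factor that decays
    with group size (a simple assumed model, not the paper's)."""
    aggregate = sum(speeds) * efficiency ** (len(speeds) - 1)
    return work / aggregate

def greedy_schedule(tasks, speeds):
    """Greedy list heuristic: dispatch kernels largest-first and, for each,
    try every group size over the currently least-loaded processors,
    keeping the choice with the earliest estimated finish time."""
    loads = [0.0] * len(speeds)          # per-processor ready times
    schedule = []
    for work in sorted(tasks, reverse=True):
        best = None
        for k in range(1, len(speeds) + 1):
            # candidate group: the k least-loaded processors
            group = sorted(range(len(speeds)), key=lambda p: loads[p])[:k]
            start = max(loads[p] for p in group)
            finish = start + runtime(work, [speeds[p] for p in group])
            if best is None or finish < best[0]:
                best = (finish, group)
        finish, group = best
        for p in group:
            loads[p] = finish            # group is busy until the kernel ends
        schedule.append((work, group, finish))
    return schedule, max(loads)

tasks = [100.0, 60.0, 40.0, 40.0]        # kernel work amounts (assumed units)
speeds = [2.0, 1.0, 1.0, 0.5]            # heterogeneous processor speeds
plan, makespan = greedy_schedule(tasks, speeds)
```

The trade-off the heuristic explores is exactly the one the abstract describes: a wider group finishes one kernel sooner (data parallelism) but leaves fewer processors free to run other kernels concurrently (task parallelism).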