Existing parallel machines offer tremendous potential for high performance, but programming them can be a cumbersome and error-prone process. Software tools that automate this work free programmers from tedious manual labor and can deliver better performance through code restructuring and optimization. This article describes an experimental software tool called CASCH (Computer Aided SCHeduling) for parallelizing and scheduling applications on message-passing multiprocessors. CASCH transforms a sequential program into a parallel program, automating scheduling, mapping, communication, and synchronization. Its major strength is an extensive library of scheduling and mapping algorithms representing a broad range of state-of-the-art work from the recent literature. These algorithms can be interactively analyzed, tested, and compared on a common platform using real data and various performance objectives, enabling the programmer to select the algorithm best suited to the application. With its graphical interface, CASCH can benefit both novice and expert programmers of parallel machines, and it can also serve as a teaching and learning aid for understanding scheduling and mapping algorithms.
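To give a flavor of the kind of task-graph scheduling algorithms CASCH's library compares, the sketch below shows classic list scheduling of a task DAG onto a fixed set of processors. This is a generic textbook heuristic, not CASCH's actual code; the function names, the bottom-level priority heuristic, and the omission of inter-processor communication costs (which CASCH's algorithms do model) are all illustrative assumptions.

```python
# Hypothetical sketch of list scheduling for a task DAG; prioritizes tasks
# by "bottom level" (longest cost-weighted path to an exit task) and greedily
# assigns each to the processor that can start it earliest.

def bottom_level(task, succs, cost):
    # Longest cost-weighted path from this task to any exit task.
    children = succs.get(task, [])
    if not children:
        return cost[task]
    return cost[task] + max(bottom_level(c, succs, cost) for c in children)

def list_schedule(tasks, succs, cost, num_procs):
    # Derive the predecessor map from the successor map.
    preds = {t: [] for t in tasks}
    for t, cs in succs.items():
        for c in cs:
            preds[c].append(t)
    # Higher bottom level first: favors tasks on the critical path.
    order = sorted(tasks, key=lambda t: -bottom_level(t, succs, cost))
    proc_free = [0.0] * num_procs   # earliest free time of each processor
    finish = {}                     # task -> finish time
    placement = {}                  # task -> assigned processor
    for t in order:
        # A task is ready once all its predecessors have finished.
        ready = max((finish[p] for p in preds[t]), default=0.0)
        # Pick the processor giving the earliest start time.
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        finish[t] = start + cost[t]
        proc_free[proc] = finish[t]
        placement[t] = proc
    return placement, finish

# Example: a small fork-join graph scheduled on 2 processors.
tasks = ["a", "b", "c", "d"]
succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
placement, finish = list_schedule(tasks, succs, cost, 2)
print(finish["d"])  # makespan of the schedule
```

Variants of this scheme differ mainly in the priority function and in how communication delays between processors are charged; those choices are exactly what a tool like CASCH lets a programmer compare on real data.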