Nimrod/K: towards massively parallel dynamic grid workflows
Proceedings of the 2008 ACM/IEEE conference on Supercomputing
Scientific workflow tools allow users to specify complex computational experiments and provide a good framework for robust science and engineering. Workflows consist of pipelines of tasks that explore the behaviour of some system, involving computations performed either locally or on remote computers. Robust scientific methods require exploration of a system's parameter space, and may involve complete state-space exploration, experimental design or numerical optimization techniques; many of these explorations can be run in parallel on distributed resources. Whilst workflow engines provide an overall framework, they have not been developed with these concepts in mind and, in general, do not provide the components necessary to implement robust workflows. In this paper we discuss Nimrod/K, a set of add-in components and a new run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the tagged-dataflow concepts developed in the 1980s for highly parallel machines. This is embodied in a new Kepler 'Director' that orchestrates execution on clusters, Grids and Clouds using many-task computing. Nimrod/K also provides a set of 'Actors' that facilitate the various modes of parameter exploration discussed above. We demonstrate the power of Nimrod/K by solving real problems in cardiac science.
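The core idea the abstract describes can be sketched in a few lines: in a tagged-dataflow execution model, each parameter combination in a sweep becomes a token carrying a unique tag, so independent iterations run concurrently and results are matched back by tag rather than by arrival order. The following is a minimal, hypothetical illustration of that idea only; it is not the Nimrod/K or Kepler API, and the `simulate` actor and `sweep` function are invented names for the sake of the example.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Hypothetical "actor": a side-effect-free task applied to one token.
def simulate(params):
    x, y = params
    return x * x + y  # stand-in for a real model evaluation

def sweep(actor, param_space, max_workers=4):
    """Tagged-dataflow-style parameter sweep: each parameter
    combination becomes a token with a unique tag, so independent
    iterations execute in parallel and results are matched to
    their tag, not to completion order."""
    tokens = list(enumerate(product(*param_space)))  # (tag, params)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {tag: pool.submit(actor, p) for tag, p in tokens}
    # All tasks are done when the pool exits; collect results by tag.
    return {tag: f.result() for tag, f in futures.items()}

results = sweep(simulate, [[0, 1, 2], [10, 20]])
print(results[0])  # token tagged 0 carries params (0, 10) -> 10
```

The tag dictionary is what distinguishes this from a plain task queue: because every token is addressable, downstream stages can consume results out of order, which is what allows the Director to overlap many independent sweep iterations on distributed resources.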