Modern computational science applications are becoming increasingly multidisciplinary, involving widely distributed research teams and their underlying computational platforms. A common problem for the grid applications used in these environments is the need to couple multiple parallel subsystems, with examples ranging from data exchanges between cooperating, linked parallel programs to concurrent data streaming to distributed storage engines. This work presents the XChange_MxN middleware infrastructure for coupling componentized distributed applications. XChange_MxN implements the basic functionality of well-known services like the CCA Forum's MxN project, providing efficient data redistribution across parallel application components. Beyond such basic functionality, however, XChange_MxN also addresses two of the problems faced by wide-area scientific collaborations: (1) the need to deal with dynamic application/component behaviors, such as dynamic arrivals and departures due to the availability of additional resources, and (2) the need to 'match' data formats across disparate application components and research teams. In response to these needs, XChange_MxN uses an anonymous publish/subscribe model for linking interacting components, and the data being exchanged is dynamically specialized and transformed to match endpoint requirements. The pub/sub paradigm makes it easy to deal with dynamic component arrivals and departures, while dynamic data transformation enables the 'in-flight' correction of data or needs mismatches between cooperating components. This work describes the design and implementation of XChange_MxN, evaluates it against less flexible transports like MPI, and highlights the utility of XChange_MxN's in-flight data specialization by applying it to the SmartPointer parallel data visualization environment developed at our institution.
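To make the MxN redistribution problem concrete, the sketch below computes the communication schedule for moving a block-distributed 1-D array from M producer ranks to N consumer ranks: each message is the overlap between a source rank's owned interval and a destination rank's owned interval. This is a generic illustration of the MxN idea, not XChange_MxN's actual algorithm or API; all function names are hypothetical.

```python
def block_range(rank, nprocs, n):
    # Index interval [lo, hi) owned by `rank` under a block
    # distribution of a length-n array over nprocs processes.
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

def mxn_schedule(m, n_procs, n):
    # For each (src, dst) pair, the overlap of their owned intervals,
    # i.e. the messages an MxN redistribution must send.
    msgs = []
    for src in range(m):
        s_lo, s_hi = block_range(src, m, n)
        for dst in range(n_procs):
            d_lo, d_hi = block_range(dst, n_procs, n)
            lo, hi = max(s_lo, d_lo), min(s_hi, d_hi)
            if lo < hi:
                msgs.append((src, dst, lo, hi))
    return msgs

# 100 elements redistributed from 4 producer ranks to 3 consumer ranks:
for src, dst, lo, hi in mxn_schedule(4, 3, 100):
    print(f"rank {src} -> rank {dst}: elements [{lo}, {hi})")
```

In a real coupled run, each (src, dst, lo, hi) tuple would become a point-to-point transfer; systems in this space differ mainly in how this schedule is computed and whether the data is transformed in flight.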
Interestingly, using XChange_MxN did not significantly affect performance but led to a reduction in the size of the code base.
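The anonymous publish/subscribe coupling with in-flight data specialization described above can be sketched as follows. This is a minimal illustration under assumed semantics, with hypothetical names (`Broker`, `subscribe`, `publish`), not the XChange_MxN implementation: publishers and subscribers never reference each other directly, so components may arrive or depart at any time, and a per-subscriber transform adapts each message to that endpoint's required format.

```python
from collections import defaultdict

class Broker:
    """Minimal anonymous pub/sub broker with per-subscriber transforms."""
    def __init__(self):
        # topic -> list of (callback, transform) pairs
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback, transform=None):
        # `transform` models in-flight data specialization: data is
        # reshaped to match this subscriber's needs before delivery.
        self.subscribers[topic].append((callback, transform))

    def unsubscribe(self, topic, callback):
        # Dynamic departure: remove the component's subscription.
        self.subscribers[topic] = [
            (cb, tf) for cb, tf in self.subscribers[topic] if cb is not callback
        ]

    def publish(self, topic, data):
        # Publisher is anonymous: it names only the topic, never a peer.
        for callback, transform in list(self.subscribers[topic]):
            callback(transform(data) if transform else data)

broker = Broker()
received = []
on_field = received.append
# A visualization component that only needs a downsampled view of the field.
broker.subscribe("field", on_field, transform=lambda xs: xs[::2])
broker.publish("field", [0, 1, 2, 3, 4, 5])
print(received)  # [[0, 2, 4]]
```

The decoupling is what makes dynamic arrivals and departures cheap: a new consumer simply subscribes with its own transform, and producers need not be notified or reconfigured.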