Performance modeling for systematic performance tuning
State of the Practice Reports
Designing and tuning parallel applications with MPI, particularly at large scale, requires understanding the performance implications of different choices of algorithms and implementation options. Which algorithm is better depends in part on the performance of the different possible communication approaches, which in turn can depend on both the system hardware and the MPI implementation. In the absence of detailed performance models for different MPI implementations, application developers often must select methods and tune codes without the means to realistically estimate the achievable performance or to rationally defend their choices. In this paper, we advocate the construction of more useful performance models that take into account limitations on network-injection rates and effective bisection bandwidth. Since collective communication plays a crucial role in enabling scalability, we also provide analytical models for the scalability of collective communication algorithms such as broadcast, allreduce, and all-to-all. We apply these models to an IBM Blue Gene/P system and compare the analytical performance estimates with experimentally measured values.
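To illustrate the kind of analytical models the abstract describes, the sketch below encodes standard textbook latency-bandwidth (alpha-beta) cost expressions for the three collectives named above, plus a bisection-bandwidth lower bound for all-to-all. These are generic formulas under simplifying assumptions (uniform link speed, no contention beyond the bisection term, computation cost ignored), not the paper's actual models; the function names and constants are illustrative only.

```python
import math

# Hedged sketch: generic alpha-beta cost models for collectives on P
# processes with message size n bytes.
#   alpha = per-message startup latency (seconds)
#   beta  = per-byte transfer time (seconds/byte)
# These are standard textbook expressions, not the models from the paper.

def bcast_binomial(P, n, alpha, beta):
    """Binomial-tree broadcast: ceil(log2 P) rounds, full message per round."""
    return math.ceil(math.log2(P)) * (alpha + n * beta)

def allreduce_reduce_scatter(P, n, alpha, beta):
    """Reduce-scatter + allgather allreduce, ignoring computation cost."""
    return 2 * math.log2(P) * alpha + 2 * ((P - 1) / P) * n * beta

def alltoall_pairwise(P, n, alpha, beta):
    """Pairwise-exchange all-to-all: P-1 rounds, n bytes to one partner each."""
    return (P - 1) * (alpha + n * beta)

def alltoall_bisection_bound(P, n, bisection_bw):
    """Bandwidth lower bound: (P/2)*(P/2)*n bytes must cross the bisection,
    so no schedule can finish faster than this on a bandwidth-limited network."""
    return (P * P / 4) * n / bisection_bw
```

Comparing a model such as `alltoall_pairwise` against `alltoall_bisection_bound` shows why effective bisection bandwidth, rather than per-link bandwidth, can dominate all-to-all cost at large P.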