Parallelizing molecular dynamics programs for distributed memory machines: an application of the CHAOS runtime support library
Decoupling synchronization and data transfer in message passing systems of parallel computers
ICS '95 Proceedings of the 9th international conference on Supercomputing
SC '98 Proceedings of the 1998 ACM/IEEE conference on Supercomputing
MPI: The Complete Reference
A Comparison of Three Gigabit Technologies: SCI, Myrinet and SGI/Cray T3D
SCI: Scalable Coherent Interface, Architecture and Software for High-Performance Compute Clusters
Studying Protein Folding on the Grid: Experiences Using CHARMM on NPACI Resources under Legion
HPDC '01 Proceedings of the 10th IEEE International Symposium on High Performance Distributed Computing
Parallelizing a DNA Simulation Code for the Cray MTA-2
CSB '02 Proceedings of the IEEE Computer Society Conference on Bioinformatics
The molecular dynamics code CHARMM is a popular research tool for computational biology. An increasing number of researchers are currently looking for affordable and adequate platforms on which to execute CHARMM or similar codes.

To address this need, we analyze the resource requirements of a CHARMM molecular dynamics simulation on PC clusters with a particle mesh Ewald (PME) treatment of long-range electrostatics, and investigate the scalability of the short-range interactions and PME separately. We characterize the workload and measure the performance gain of CHARMM with different network technologies and different software infrastructures, and show that performance depends more on the software infrastructure than on the hardware components. In the present study, powerful communication systems like Myrinet deliver performance that comes close to the MPP supercomputers of the past decade (e.g. Cray T3D), but improved scalability can also be achieved with better communication system software like SCore, without the additional hardware cost.

The experimental method of workload characterization presented can be easily applied to other codes. The detailed performance figures breaking down the calculation into computation, communication and synchronization allow one to derive good estimates of the benefits of moving applications to novel computing platforms such as widely distributed computers (grids).