As an entry for the 1998 Gordon Bell price/performance prize, we present two calculations from the disciplines of condensed matter physics and astrophysics. The simulations were performed on a 70-processor DEC Alpha cluster (Avalon), constructed entirely from commodity personal computer technology and freely available software, at a cost of 152 thousand dollars.

Avalon performed a 60 million particle molecular dynamics (MD) simulation of shock-induced plasticity using the SPaSM MD code. The beginning of this simulation sustained approximately 10 Gflops over a 44-hour period and saved 68 Gbytes of raw data. The resulting price/performance is $15/Mflop, or equivalently, 67 Gflops per million dollars. This is more than a factor of three better than last year's Gordon Bell price/performance winners. The simulation is similar to those which won part of the 1993 Gordon Bell performance prize using a 1024-node CM-5. It continued to run for a total of 332 hours on Avalon, computing a total of 1.12 × 10^16 floating point operations. This places it among the few scientific simulations ever to have involved more than 10 Petaflops of computation.

Avalon also performed a gravitational treecode N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 6.78 Gflops over a 26-hour period. This simulation is exactly the same as the one that won a Gordon Bell price/performance prize last year on the Loki cluster, at a total performance 7.7 times that of Loki and a price/performance 2.6 times better than Loki. Further, Avalon ranked 315th on the June 1998 TOP500 list, obtaining 19.3 Gflops on the parallel Linpack benchmark.
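The headline figures above follow from simple arithmetic on the stated cost and sustained rates. A minimal sketch, using only the approximate numbers given in the abstract (small rounding differences against the quoted $15/Mflop and 67 Gflops per million dollars are expected, since the paper's exact sustained rate is not reproduced here):

```python
# Price/performance arithmetic for Avalon, using figures from the abstract.
# All inputs are approximate values quoted in the text.

cost_dollars = 152_000       # total cluster cost
sustained_gflops = 10.0      # approximate sustained rate of the MD run

# Dollars per Mflop/s of sustained performance
dollars_per_mflop = cost_dollars / (sustained_gflops * 1000)

# Equivalent figure: sustained Gflop/s per million dollars
gflops_per_million = sustained_gflops * 1_000_000 / cost_dollars

print(f"${dollars_per_mflop:.1f}/Mflop")           # ~ $15/Mflop
print(f"{gflops_per_million:.0f} Gflops per $1M")  # ~ 66 Gflops/$1M

# Average rate implied by the full 332-hour MD run (1.12e16 operations)
total_flop = 1.12e16
avg_gflops = total_flop / (332 * 3600) / 1e9
print(f"average over 332 h: {avg_gflops:.2f} Gflops")  # ~ 9.37 Gflops
```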