Clusters are now a dominant model for high-capacity, scalable computing built on a commodity cost structure. This paper describes the first-generation PathScale™ InfiniPath™ adapter — a single-chip ASIC that directly connects HyperTransport™-attached processors, such as the AMD Opteron™, to the InfiniBand™ network fabric. In addition to providing ultra-low communication latency, the PathScale InfiniPath adapter achieves high bandwidth across message sizes from very small to large, and its performance scales on multi-core processor nodes. Use of the InfiniBand switching fabric allows this high bandwidth to be realized at a commodity fabric price point.