High-Performance Reconfigurable Computers (HPRCs) consist of one or more standard microprocessors tightly coupled with one or more reconfigurable FPGAs. HPRCs have been shown to provide good speedups and good cost/performance ratios, but not necessarily ease of use, leading to slow acceptance of this technology. HPRCs introduce new design challenges, such as a lack of portability across platforms, incompatibility with legacy code, user reluctance to change their code base, a prolonged learning curve, and the need for a system-level hardware/software co-design development flow. This article presents the evolution of and current work on TMD-MPI, which started as an MPI-based programming model for Multiprocessor Systems-on-Chip implemented in FPGAs and has since evolved to include multiple X86 processors. TMD-MPI is shown to address current design challenges in HPRC usage, suggesting that the MPI standard has enough syntax and semantics to program these new types of parallel architectures. Also presented is the TMD-MPI Ecosystem, a set of research projects and tools developed around TMD-MPI to further improve HPRC usability. Finally, we present preliminary communication performance measurements.
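As a rough illustration of the programming model the abstract describes (not code from the article itself), the sketch below is a minimal standard-MPI point-to-point exchange in C. The idea behind TMD-MPI is that which rank maps to an X86 process and which maps to an FPGA-based computing engine is a platform decision invisible to the source code; this example assumes only basic primitives (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize), and the doubling "computation" on rank 1 is a hypothetical stand-in for accelerated work.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 could be an X86 host process: send an operand to
           rank 1 and wait for the result. */
        data = 42;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0 received %d back\n", data);
    } else if (rank == 1) {
        /* Rank 1 could equally be a computing engine inside an FPGA;
           the message-passing interface is identical either way. */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        data *= 2; /* hypothetical stand-in for the accelerated kernel */
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

The point of the sketch is portability: because the code is written against the MPI standard rather than a vendor FPGA API, the same source can, in principle, be retargeted across HPRC platforms by changing the rank-to-hardware mapping rather than the program.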