MPI as a Programming Model for High-Performance Reconfigurable Computers

  • Authors:
  • Manuel Saldaña; Arun Patel; Christopher Madill; Daniel Nunes; Danyao Wang; Paul Chow; Ralph Wittig; Henry Styles; Andrew Putnam

  • Affiliations:
  • Arches Computing Systems; Arches Computing Systems; University of Toronto; University of Toronto; University of Toronto; University of Toronto; Xilinx, San Jose; Xilinx, San Jose; Xilinx, San Jose

  • Venue:
  • ACM Transactions on Reconfigurable Technology and Systems (TRETS)
  • Year:
  • 2010

Abstract

High-Performance Reconfigurable Computers (HPRCs) consist of one or more standard microprocessors tightly coupled with one or more reconfigurable FPGAs. HPRCs have been shown to provide good speedups and good cost/performance ratios, but not necessarily ease of use, leading to slow acceptance of this technology. HPRCs introduce new design challenges, such as the lack of portability across platforms, incompatibilities with legacy code, user reluctance to change an existing code base, a prolonged learning curve, and the need for a system-level hardware/software co-design development flow. This article presents the evolution of and current work on TMD-MPI, which started as an MPI-based programming model for Multiprocessor Systems-on-Chip implemented in FPGAs and has since evolved to include multiple x86 processors. TMD-MPI is shown to address current design challenges in HPRC usage, suggesting that the MPI standard has enough syntax and semantics to program these new types of parallel architectures. Also presented is the TMD-MPI Ecosystem, a set of research projects and tools developed around TMD-MPI to further improve HPRC usability. Finally, preliminary communication performance measurements are presented.
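
The portability argument in the abstract rests on ranks communicating only through standard MPI calls, so the same source can target either software processes or hardware compute engines. The following minimal C sketch (not taken from the paper; rank numbers and message sizes are illustrative assumptions) shows the style of code such a model permits: nothing in it reveals whether the peer rank runs on an x86 processor or inside an FPGA.

/*
 * Minimal sketch of rank-to-rank communication using only standard
 * MPI calls. Under an MPI-subset implementation such as TMD-MPI,
 * the peer rank could be a software process or an FPGA-based
 * compute engine without changing this source.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double buf[64] = {0};          /* illustrative payload */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0 && size > 1) {
        /* Send data to rank 1 and wait for its reply; the caller
         * does not know how rank 1 is implemented. */
        MPI_Send(buf, 64, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 64, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 0: round trip with rank 1 complete\n");
    } else if (rank == 1) {
        /* Echo the data back to rank 0. */
        MPI_Recv(buf, 64, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(buf, 64, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}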