A parallel numerical solver using hierarchically tiled arrays

  • Authors:
  • James C. Brodman, G. Carl Evans, Murat Manguoglu, Ahmed Sameh, María J. Garzarán, and David Padua

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Dept. of Computer Science (Brodman, Evans, Garzarán, Padua); Purdue University, Dept. of Computer Science (Manguoglu, Sameh)

  • Venue:
  • LCPC '10: Proceedings of the 23rd International Conference on Languages and Compilers for Parallel Computing
  • Year:
  • 2010

Abstract

Solving linear systems is an important problem in scientific computing. Exploiting parallelism is essential for solving complex systems, and doing so traditionally involves writing parallel algorithms on top of a library such as MPI. The SPIKE family of algorithms is one well-known example of a parallel solver for linear systems. The Hierarchically Tiled Array (HTA) data type extends traditional data-parallel array operations with explicit tiling and allows programmers to directly manipulate tiles. The tiles of the HTA data type map naturally onto the block structure of many numerical computations, including the SPIKE family of algorithms. The HTA's higher level of abstraction makes the same program portable across different platforms; current implementations target both shared-memory and distributed-memory models. In this paper we present a proof-of-concept for portable linear solvers. We implement two algorithms from the SPIKE family using the HTA library. We show that our implementations of SPIKE exploit the abstractions provided by the HTA to produce compact, clean code that can run on both shared-memory and distributed-memory models without modification. We discuss how we map the algorithms to HTA programs and examine their performance. We compare the performance of our HTA codes with that of comparable codes written in MPI and with current state-of-the-art linear algebra routines.
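
To make the tiling idea concrete, below is a minimal, self-contained C++ sketch of a tiled data type with tile-level operations. It is illustrative only and does not reproduce the HTA library's actual interface; the class and method names (TiledVector, tile, forEachTile) are hypothetical. It shows the idea the abstract describes: the data type carries an explicit tile structure, and the programmer operates on whole tiles, which maps naturally onto block algorithms such as SPIKE.

    // Illustrative sketch only: a toy tiled 1-D array, NOT the actual HTA API.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    class TiledVector {
    public:
        TiledVector(std::size_t numTiles, std::size_t tileSize)
            : tileSize_(tileSize), data_(numTiles * tileSize, 0.0) {}

        // Direct tile manipulation: tile(i) exposes tile i as a span of elements.
        double* tile(std::size_t i) { return data_.data() + i * tileSize_; }
        std::size_t numTiles() const { return data_.size() / tileSize_; }
        std::size_t tileSize() const { return tileSize_; }

        // Data-parallel operation expressed at tile granularity; each tile is
        // independent, so an HTA-style runtime could assign tiles to threads
        // or to distributed processes without changing this code.
        template <typename TileOp>
        void forEachTile(TileOp op) {
            for (std::size_t i = 0; i < numTiles(); ++i)
                op(tile(i), tileSize_);
        }

    private:
        std::size_t tileSize_;
        std::vector<double> data_;
    };

    int main() {
        TiledVector x(4, 8);                      // 4 tiles of 8 elements each
        x.forEachTile([](double* t, std::size_t n) {
            for (std::size_t j = 0; j < n; ++j)   // stand-in for a per-tile
                t[j] = 2.0 * j;                   // solve in a block algorithm
        });
        std::cout << x.tile(2)[3] << "\n";        // prints 6
        return 0;
    }

In the paper's setting, the per-tile operation would be a block computation such as factoring or solving with one diagonal block of the SPIKE partitioning, and the runtime, not the application code, decides whether tiles live in shared or distributed memory.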