Parallelizing dense and banded linear algebra libraries using SMPSs

  • Authors:
  • Rosa M. Badia;José R. Herrero;Jesús Labarta;Josep M. Pérez;Enrique S. Quintana-Ortí;Gregorio Quintana-Ortí

  • Affiliations:
  • Barcelona Supercomputing Center, Centro Nacional de Supercomputación (BSC-CNS) and Universitat Politècnica de Catalunya, 08034 Barcelona, Spain and Consejo Superior de Investigaciones Cien ...;Departamento de Arquitectura de Computadores, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain;Barcelona Supercomputing Center, Centro Nacional de Supercomputación (BSC-CNS) and Universitat Politècnica de Catalunya, Nexus II Building, C. Jordi Girona 29, 08034 Barcelona, Spain;Barcelona Supercomputing Center, Centro Nacional de Supercomputación (BSC-CNS) and Universitat Politècnica de Catalunya, Nexus II Building, C. Jordi Girona 29, 08034 Barcelona, Spain;Departamento de Ingeniería y Ciencia de Computadores, Universidad Jaume I, 12071 Castellón, Spain;Departamento de Ingeniería y Ciencia de Computadores, Universidad Jaume I, 12071 Castellón, Spain

  • Venue:
  • Concurrency and Computation: Practice & Experience
  • Year:
  • 2009

Abstract

The promise of future many-core processors, with hundreds of threads running concurrently, has led the developers of linear algebra libraries to rethink their designs in order to extract more parallelism, further exploit data locality, attain better load balance, and pay careful attention to the critical path of the computation. In this paper we describe how existing serial libraries such as (C)LAPACK and FLAME can be easily parallelized using the SMPSs tools, which consist of a few OpenMP-like pragmas and a run-time system. In the LAPACK case, this usually requires the development of blocked algorithms for simple BLAS-level operations, which expose concurrency at a finer grain. For better performance, our experimental results indicate that the column-major order employed by this library needs to be abandoned in favor of a block data layout. This will require a deeper rewrite of LAPACK or, alternatively, a dynamic conversion of the storage pattern at run time. The parallelization of FLAME routines using SMPSs is simpler, as this library includes blocked algorithms (or algorithms-by-blocks, in FLAME parlance) for most operations and storage-by-blocks (or block data layout) is already in place. Copyright © 2009 John Wiley & Sons, Ltd.
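
To give a flavor of the approach the abstract describes, the sketch below shows how a serial algorithm-by-blocks (here, a lower-triangular Cholesky factorization) could be annotated with SMPSs-style task pragmas. This sketch is not taken from the paper: the kernel names (chol_potrf, chol_trsm, chol_syrk, chol_gemm), the tile size NB, the tile count NT, and the pointer-to-tile storage are illustrative assumptions, the kernels are assumed to wrap serial BLAS/LAPACK calls, and the exact clause syntax follows published SMPSs examples but may differ slightly across SMPSs versions. The intent is that the SMPSs runtime infers a task dependency graph from the input/inout clauses and executes independent tile operations concurrently, while the driver keeps its serial loop structure.

    /* Illustrative sketch (assumptions noted above), not code from the paper. */
    #define NB 256   /* assumed tile dimension */
    #define NT 8     /* assumed number of tile rows/columns */

    /* Each NB x NB tile is stored contiguously: the block data layout the
       abstract argues for. */
    typedef double block_t[NB][NB];

    #pragma css task inout(A[NB][NB])
    void chol_potrf(double A[NB][NB]);   /* factor a diagonal tile (dpotrf-like) */

    #pragma css task input(T[NB][NB]) inout(B[NB][NB])
    void chol_trsm(double T[NB][NB], double B[NB][NB]);   /* triangular solve on a subdiagonal tile */

    #pragma css task input(A[NB][NB]) inout(C[NB][NB])
    void chol_syrk(double A[NB][NB], double C[NB][NB]);   /* rank-NB update of a diagonal tile */

    #pragma css task input(A[NB][NB], B[NB][NB]) inout(C[NB][NB])
    void chol_gemm(double A[NB][NB], double B[NB][NB], double C[NB][NB]);   /* update an off-diagonal tile */

    /* Serial-looking driver: A[i][j] points to tile (i,j) of the lower
       triangle; concurrency comes from the runtime tracking dependences
       among tiles, not from any explicit threading here. */
    void cholesky_by_blocks(block_t *A[NT][NT])
    {
        for (int k = 0; k < NT; k++) {
            chol_potrf(*A[k][k]);
            for (int i = k + 1; i < NT; i++)
                chol_trsm(*A[k][k], *A[i][k]);
            for (int i = k + 1; i < NT; i++) {
                for (int j = k + 1; j < i; j++)
                    chol_gemm(*A[i][k], *A[j][k], *A[i][j]);
                chol_syrk(*A[i][k], *A[i][i]);
            }
        }
    }

Storing each NB x NB tile contiguously corresponds to the block data layout (storage-by-blocks) that the abstract advocates; with the column-major storage used by LAPACK, either a deeper rewrite of the library or a run-time conversion of the storage pattern would be needed before the tiles can be treated as independent task operands.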