Improving Linear Algebra Computation on NUMA Platforms through Auto-tuned Nested Parallelism

  • Authors:
  • Javier Cuenca, Luis P. Garcia, Domingo Gimenez

  • Venue:
  • PDP '12 Proceedings of the 2012 20th Euromicro International Conference on Parallel, Distributed and Network-based Processing
  • Year:
  • 2012

Abstract

The most computationally demanding scientific and engineering problems are solved on large parallel systems. In some cases these systems are Non-Uniform Memory Access (NUMA) multiprocessors made up of a large number of cores that share a hierarchically organized memory. Basic linear algebra routines such as those in BLAS typically constitute the computational kernel of these problems, so their efficient use on such systems would contribute to a faster solution of a wide range of scientific problems. Normally a multithreaded BLAS library optimized for the system is used, but as the number of cores increases the performance degrades significantly, which can lead to inefficient use of these large, expensive systems. This paper empirically analyses the behaviour on large NUMA systems of the matrix multiplication routine of the BLAS library and its combination with OpenMP to obtain nested parallelism. With the auto-tuning method proposed in this work, a reduction in execution time is achieved with respect to the matrix multiplication of the library.
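
To illustrate the kind of nested parallelism the abstract refers to, the sketch below splits a matrix multiplication across OpenMP threads, with each thread calling a multithreaded BLAS `dgemm` on a block of rows. This is only an illustration of the general idea, not the authors' code: the routine name `nested_dgemm`, the row-block partitioning, and the use of the OpenBLAS-specific `openblas_set_num_threads` call are assumptions made here for the example.

```c
/* Minimal sketch of OpenMP + multithreaded BLAS nested parallelism for C = A*B.
 * Assumes an OpenBLAS-style installation (cblas.h, openblas_set_num_threads);
 * other BLAS libraries expose their own thread-control interfaces. */
#include <stdlib.h>
#include <omp.h>
#include <cblas.h>

/* Multiply the n x n row-major matrices A and B into C, splitting the rows of A
 * among `omp_threads` OpenMP threads; each thread issues a dgemm call that the
 * BLAS library may in turn run with `blas_threads` threads. */
void nested_dgemm(const double *A, const double *B, double *C, int n,
                  int omp_threads, int blas_threads)
{
    omp_set_num_threads(omp_threads);
    /* Threads per BLAS call; whether nesting is honoured depends on how the
     * BLAS library was built and configured. */
    openblas_set_num_threads(blas_threads);

    #pragma omp parallel
    {
        int t     = omp_get_thread_num();
        int nth   = omp_get_num_threads();
        int first = (int)((long)t * n / nth);          /* first row of this block */
        int rows  = (int)((long)(t + 1) * n / nth) - first;

        if (rows > 0)
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        rows, n, n, 1.0,
                        A + (size_t)first * n, n,   /* block of rows of A */
                        B, n, 0.0,
                        C + (size_t)first * n, n);  /* corresponding rows of C */
    }
}
```

Under this kind of scheme, an auto-tuning step such as the one proposed in the paper would search over combinations of OpenMP and BLAS threads (here the `omp_threads` and `blas_threads` parameters) for each problem size and platform, selecting the configuration that minimizes execution time.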