Automatic tuning of the sparse matrix vector product on GPUs based on the ELLR-T approach

  • Authors:
  • Francisco Vázquez; José Jesús Fernández; Ester M. Garzón

  • Affiliations:
  • Almeria University, Dpt Computer Architecture and Electronics, Ctra San Urbano s/n Cañada, 04120 Almeria, Spain; Almeria University, Dpt Computer Architecture and Electronics, Ctra San Urbano s/n Cañada, 04120 Almeria, Spain and Centro Nacional de Biotecnologia (CNB-CSIC), Darwin 3, Campus de Cantoblanc ...; Almeria University, Dpt Computer Architecture and Electronics, Ctra San Urbano s/n Cañada, 04120 Almeria, Spain

  • Venue:
  • Parallel Computing
  • Year:
  • 2012

Abstract

A wide range of applications in engineering and scientific computing require the acceleration of the sparse matrix vector product (SpMV). Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors, and several SpMV implementations for GPUs have already appeared. This work focuses on the ELLR-T algorithm for computing SpMV on GPU architectures, whose performance depends strongly on the optimal selection of two parameters. Taking into account that memory operations dominate the performance of ELLR-T, an analytical model is proposed to auto-tune ELLR-T for particular combinations of sparse matrix and GPU architecture. Evaluation results with a representative set of test matrices show that the average performance achieved by ELLR-T, auto-tuned by means of the proposed model, is close to the optimum. A comparative analysis of ELLR-T against a variety of previous proposals shows that ELLR-T with the estimated configuration reaches the best performance on the GPU for this set of test matrices.
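
The abstract does not reproduce the kernel itself, so the following is only a minimal CUDA sketch of the general ELLR-T idea: an ELLPACK-R style storage (padded column-major value and index arrays plus a per-row length array rl), with T threads cooperating on each row. The kernel name, argument names, and the simplified data layout are assumptions for illustration, not the authors' code; the two tuning knobs modeled in the paper correspond here to the template parameter T and the thread block size chosen at launch.

```cuda
// Hypothetical ELLR-T-style SpMV sketch (illustration only, not the authors' implementation).
// A   : nonzero values, column-major, padded per row          (pitch >= n)
// J   : column indices, same layout as A
// rl  : number of nonzeros in each row (ELLPACK-R row lengths)
// T   : threads cooperating on one row (tuning parameter; block size must be a multiple of T)
template <int T>
__global__ void spmv_ellrt(int n, int pitch,
                           const float* __restrict__ A,
                           const int*   __restrict__ J,
                           const int*   __restrict__ rl,
                           const float* __restrict__ x,
                           float*       __restrict__ y)
{
    extern __shared__ float partial[];            // one partial sum per thread in the block
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int row  = tid / T;                           // row assigned to this group of T threads
    int lane = tid % T;                           // position of this thread within its group

    float sum = 0.0f;
    if (row < n) {
        int len = rl[row];
        // Each of the T threads walks the row with stride T.
        for (int k = lane; k < len; k += T)
            sum += A[k * pitch + row] * x[J[k * pitch + row]];
    }
    partial[threadIdx.x] = sum;
    __syncthreads();

    // The first thread of each group reduces the T partial sums of its row.
    if (row < n && lane == 0) {
        for (int t = 1; t < T; ++t)
            sum += partial[threadIdx.x + t];
        y[row] = sum;
    }
}
```

A launch would look like spmv_ellrt<4><<<gridSize, blockSize, blockSize * sizeof(float)>>>(...), with T and blockSize being exactly the kind of (matrix, GPU)-dependent configuration the paper's analytical model is designed to select; the actual ELLR-T implementation also arranges the arrays so that global memory accesses coalesce, a detail omitted from this sketch.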