Codesign Tradeoffs for High-Performance, Low-Power Linear Algebra Architectures

  • Authors:
  • Ardavan Pedram, Robert A. van de Geijn, Andreas Gerstlauer

  • Affiliations:
  • The University of Texas at Austin, Austin (all authors)

  • Venue:
  • IEEE Transactions on Computers
  • Year:
  • 2012


Abstract

As technology reaches physical limits, reducing power consumption is a key issue on the path to sustained performance. In this paper, we study fundamental tradeoffs and limits in efficiency (as measured in energy per operation) that can be achieved for an important class of kernels, namely the level-3 Basic Linear Algebra Subprograms (BLAS). It is well accepted that specialization is the key to efficiency. This paper establishes a baseline by studying GEneral Matrix-matrix Multiplication (GEMM) on a variety of custom and general-purpose CPU and GPU architectures. Our analysis shows that orders-of-magnitude improvements in efficiency are possible with relatively simple customizations and fine-tuning of memory hierarchy configurations. We argue that these customizations can be generalized to perform other representative linear algebra operations. In addition to exposing the sources of inefficiency in current CPUs and GPUs, our results show that our prototype Linear Algebra Processor (LAP), implementing double-precision GEMM (DGEMM), can achieve 600 GFLOPS while consuming less than 25 W in standard 45 nm technology, making it up to 50× more energy efficient than cutting-edge CPUs.
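
For concreteness, GEMM computes C := alpha*A*B + beta*C. As a reference point for the kernel class studied above, here is a minimal textbook triple-loop DGEMM sketch in C (column-major storage; the function name is hypothetical and this is not the paper's LAP design):

    /* Reference DGEMM: C := alpha*A*B + beta*C, column-major storage.
     * A is m x k (leading dimension lda), B is k x n (ldb), C is m x n (ldc).
     * Illustrative only; high-performance and hardware implementations
     * block this loop nest around the memory hierarchy instead. */
    #include <stddef.h>

    void dgemm_ref(size_t m, size_t n, size_t k,
                   double alpha, const double *A, size_t lda,
                   const double *B, size_t ldb,
                   double beta, double *C, size_t ldc)
    {
        for (size_t j = 0; j < n; ++j) {
            for (size_t i = 0; i < m; ++i) {
                double acc = 0.0;
                for (size_t p = 0; p < k; ++p)      /* 2*m*n*k flops per call */
                    acc += A[i + p * lda] * B[p + j * ldb];
                C[i + j * ldc] = alpha * acc + beta * C[i + j * ldc];
            }
        }
    }

Since each call performs 2*m*n*k floating-point operations, the reported 600 GFLOPS at under 25 W works out to 24 GFLOPS/W, i.e., roughly 42 pJ per double-precision operation.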