Accelerating Value-at-Risk estimation on highly parallel architectures

  • Authors:
  • M. F. Dixon; J. Chong; K. Keutzer

  • Affiliations:
  • Department of Mathematics, University of California, One Shields Avenue, Davis, CA 95616, USA; Parasians LLC, 258 Ficus Terrace, Sunnyvale, CA 94086, USA and Department of Electrical Engineering and Computer Science, UC Berkeley, 576 Soda Hall, CA 94720, USA; Department of Electrical Engineering and Computer Science, UC Berkeley, 576 Soda Hall, CA 94720, USA

  • Venue:
  • Concurrency and Computation: Practice & Experience
  • Year:
  • 2012

Abstract

Values of portfolios in modern financial markets may change precipitously with changing market conditions. The utility of financial risk management tools therefore depends on whether they can estimate the Value-at-Risk (VaR) of portfolios on demand, when key decisions need to be made. However, VaR estimation of portfolios relies on the Monte Carlo method, which is so computationally intensive that it is often run as an overnight batch job. With the proliferation of highly parallel computing platforms such as multicore CPUs and manycore graphics processing units (GPUs), teraFLOPS of computational capability are now available on a desktop computer, enabling the VaR of large portfolios with thousands of risk factors to be computed in a fraction of a second. Achieving such performance in practice requires the assimilation of expertise in three areas: (i) the application domain; (ii) statistical analytics; and (iii) parallel computing. This paper demonstrates that these areas of expertise inform optimization perspectives that, when combined, lead to a 127× speedup in our CPU-based implementation and a 538× speedup in our GPU-based implementation. Copyright © 2011 John Wiley & Sons, Ltd.
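
For context, the core computation behind Monte Carlo VaR is a loss simulation over correlated risk factors followed by an empirical quantile. The sketch below is a minimal, hypothetical Python/NumPy illustration under a delta-normal approximation with synthetic covariance and sensitivities; it is not the paper's CPU or GPU implementation, and all names and parameters here are assumptions for illustration.

```python
import numpy as np

# Illustrative Monte Carlo VaR sketch (delta-normal approximation).
# Hypothetical inputs: all sizes, Sigma, and delta are made up here.
rng = np.random.default_rng(0)

n_factors = 100      # number of risk factors (kept small for a quick run)
n_paths = 100_000    # Monte Carlo scenarios
confidence = 0.99    # VaR confidence level

# Synthetic positive semi-definite factor covariance and portfolio deltas.
A = rng.standard_normal((n_factors, n_factors)) / np.sqrt(n_factors)
Sigma = A @ A.T
delta = rng.standard_normal(n_factors)

# Correlate i.i.d. normals via the Cholesky factor of Sigma
# (small jitter on the diagonal for numerical stability).
L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(n_factors))
z = rng.standard_normal((n_paths, n_factors))
returns = z @ L.T                # correlated factor returns per scenario

# Portfolio loss per scenario under the linear (delta) approximation.
losses = -(returns @ delta)

# VaR is the empirical quantile of the simulated loss distribution.
var = np.quantile(losses, confidence)
print(f"{confidence:.0%} one-period VaR: {var:.4f}")
```

Each scenario is independent of the others, which is why this workload maps naturally onto the highly parallel multicore CPU and manycore GPU platforms the paper targets.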