On reducing I/O overheads in large-scale invariant subspace projections

  • Authors and affiliations:
  • Hasan Metin Aktulga (Lawrence Berkeley National Laboratory, Berkeley, CA)
  • Chao Yang (Lawrence Berkeley National Laboratory, Berkeley, CA)
  • Ümit V. Çatalyürek (The Ohio State University, Columbus, OH)
  • Pieter Maris (Iowa State University, Ames, IA)
  • James P. Vary (Iowa State University, Ames, IA)
  • Esmond G. Ng (Lawrence Berkeley National Laboratory, Berkeley, CA)

  • Venue:
  • Euro-Par '11: Proceedings of the 2011 International Conference on Parallel Processing
  • Year:
  • 2011


Abstract

Obtaining highly accurate predictions of the properties of light atomic nuclei using the Configuration Interaction (CI) method requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian, H. One particular approach, the J-scheme, requires projecting the H matrix onto an invariant subspace. Since these matrices can be very large, enormous computing power is needed, and significant stress is placed on the memory and I/O subsystems. By exploiting the inherent localities in the problem and using the MPI one-sided communication routines, backed by the RDMA operations available on modern parallel architectures, we show that the I/O overheads can be reduced drastically for large problems. This is demonstrated in the subspace projection phase of J-scheme calculations for the 6Li nucleus, where our new implementation based on one-sided MPI communication outperforms the previous I/O-based implementation by almost a factor of 10.