An approach to locality-conscious load balancing and transparent memory hierarchy management with a global-address-space parallel programming model

  • Authors:
  • Sriram Krishnamoorthy; Umit Catalyurek; Jarek Nieplocha; P. Sadayappan

  • Affiliations:
  • Dept. of Computer Science and Engineering, The Ohio State University; Dept. of Biomedical Informatics, The Ohio State University; Pacific Northwest National Laboratory; Dept. of Computer Science and Engineering, The Ohio State University

  • Venue:
  • IPDPS'06 Proceedings of the 20th international conference on Parallel and distributed processing
  • Year:
  • 2006

Abstract

The development of efficient parallel out-of-core applications is often tedious because of the need to explicitly manage the movement of data between files and the data structures of the parallel program. Several large-scale applications require multiple passes of processing over data too large to fit in memory, where significant concurrency exists within each pass. This paper describes a global-address-space framework for the convenient specification and efficient execution of parallel out-of-core applications operating on block-sparse data. The programming model provides a global view of block-sparse matrices and a mechanism for the expression of parallel tasks that operate on block-sparse data. The tasks are automatically partitioned into phases that operate on memory-resident data, and mapped onto processors to optimize load balance and data locality. Experimental results are presented that demonstrate the utility of the approach.
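
As a rough illustration of the execution model the abstract describes, the sketch below groups tasks over block-sparse data into phases whose working sets fit in memory, then assigns each phase's tasks to processors with a locality-aware heuristic. This is a minimal, hypothetical sketch: the names (partition_into_phases, assign_tasks_locally, memory_budget, block_size, the toy block identifiers) and the greedy grouping and affinity heuristics are assumptions for illustration, not the paper's actual interface or algorithms.

```python
# Illustrative sketch only: greedy phase formation under a memory budget,
# followed by locality-preferring task-to-processor assignment.
from collections import defaultdict


def partition_into_phases(tasks, block_size, memory_budget):
    """Greedily group tasks into phases; each phase's distinct blocks must fit in memory.

    tasks: list of (task_id, set_of_block_ids) pairs.
    block_size: bytes per block (assumed uniform here for simplicity).
    memory_budget: bytes available for memory-resident blocks per phase.
    """
    phases = []
    current_tasks, current_blocks = [], set()
    for task_id, blocks in tasks:
        new_blocks = current_blocks | blocks
        if current_tasks and len(new_blocks) * block_size > memory_budget:
            # Close the current phase and start a new one for this task.
            phases.append((current_tasks, current_blocks))
            current_tasks, current_blocks = [], set()
            new_blocks = set(blocks)
        current_tasks.append(task_id)
        current_blocks = new_blocks
    if current_tasks:
        phases.append((current_tasks, current_blocks))
    return phases


def assign_tasks_locally(phase_tasks, task_blocks, num_procs):
    """Assign one phase's tasks to processors, preferring processors that already
    hold a task's blocks; ties are broken toward the lighter-loaded processor."""
    owner = {}                 # block_id -> processor that first touched it
    load = [0] * num_procs     # tasks assigned to each processor so far
    assignment = {}
    for task_id in phase_tasks:
        blocks = task_blocks[task_id]
        affinity = defaultdict(int)
        for b in blocks:
            if b in owner:
                affinity[owner[b]] += 1
        proc = max(range(num_procs), key=lambda p: (affinity[p], -load[p]))
        assignment[task_id] = proc
        load[proc] += 1
        for b in blocks:
            owner.setdefault(b, proc)
    return assignment


if __name__ == "__main__":
    # Toy example: six tasks, each reading two blocks of a block-sparse matrix.
    task_blocks = {
        "t0": {"A00", "B00"}, "t1": {"A00", "B01"},
        "t2": {"A11", "B10"}, "t3": {"A11", "B11"},
        "t4": {"A22", "B20"}, "t5": {"A22", "B21"},
    }
    tasks = sorted(task_blocks.items())
    for i, (phase_tasks, blocks) in enumerate(
            partition_into_phases(tasks, block_size=1, memory_budget=4)):
        print(f"phase {i}: tasks={phase_tasks} resident blocks={sorted(blocks)}")
        print("  assignment:", assign_tasks_locally(phase_tasks, task_blocks, num_procs=2))
```

In this toy setting, a phase is closed as soon as admitting the next task would push the set of distinct resident blocks past the memory budget, and within a phase tasks that share blocks tend to land on the same processor; the paper's framework addresses the same two concerns (memory-resident phases, and locality- and load-aware mapping) with its own partitioning and scheduling machinery.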