Performance and Scalability Evaluation of 'Big Memory' on Blue Gene Linux

  • Authors:
  • Kazutomo Yoshii; Kamil Iskra; Harish Naik; Pete Beckman; P. Chris Broekema

  • Affiliations:
  • Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, USA (Yoshii, Iskra, Naik, Beckman); Leadership Computing Facility, Argonne National Laboratory, Argonne, IL, USA (Beckman); ASTRON, Netherlands Institute for Radio Astronomy, Dwingeloo, The Netherlands (Broekema)

  • Venue:
  • International Journal of High Performance Computing Applications
  • Year:
  • 2011

Abstract

We address memory performance issues observed in Blue Gene Linux and discuss the design and implementation of 'Big Memory', an alternative, transparent memory space introduced to eliminate those issues. We evaluate the performance of Big Memory using custom memory benchmarks, the NAS Parallel Benchmarks, and the Parallel Ocean Program, at a scale of up to 4,096 nodes. We find that Big Memory successfully resolves the performance issues normally encountered in Blue Gene Linux. For the ocean simulation program, we even find that Linux with Big Memory provides better scalability than does the lightweight compute node kernel designed solely for high-performance applications. Although originally intended exclusively for compute node tasks, our new memory subsystem dramatically improves the performance of certain I/O node applications as well. We demonstrate this using the central processor of the LOw Frequency ARray (LOFAR) radio telescope as an example.