Optimizing latency and throughput for spawning processes on massively multicore processors

  • Authors:
  • Abhishek Kulkarni; Andrew Lumsdaine; Michael Lang; Latchesar Ionkov

  • Affiliations:
  • Indiana University, Bloomington, IN; Indiana University, Bloomington, IN; Los Alamos National Laboratory, Los Alamos, NM; Los Alamos National Laboratory, Los Alamos, NM

  • Venue:
  • Proceedings of the 2nd International Workshop on Runtime and Operating Systems for Supercomputers
  • Year:
  • 2012

Abstract

The execution of an SPMD application involves running multiple instances of a process with possibly varying arguments. With the widespread adoption of massively multicore processors, attention has turned to harnessing the abundant compute resources effectively and in a power-efficient manner. Although much work has been done on optimizing distributed process launch using hierarchical techniques, the performance of spawning processes within a single node has remained largely unstudied. Reducing the latency to spawn a new process locally results in faster global job launch. Furthermore, emerging dynamic and resilient execution models are designed on the premise of maintaining process pools for fault isolation and of launching many processes within a relatively short period of time. Optimizing the latency and throughput of process spawning would improve the overall performance of runtime systems, allow adaptive process-replication reliability, and motivate the design and implementation of process management interfaces in future manycore operating systems. In this paper, we study several factors that limit efficient spawning of processes on massively multicore architectures. We have developed a library that optimizes launching multiple instances of the same executable. Our microbenchmarks show a 20-80% decrease in process spawn time for multiple executables. We further discuss the effects of memory locality and propose NUMA-aware extensions to optimize launching processes with large memory-mapped segments, including dynamic shared libraries. Finally, we describe vector operating system interfaces for spawning a batch of processes from a given executable on specific cores. Our results show a 50x speedup over the traditional method of launching new processes using the fork and exec system calls.
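
For context, the sketch below illustrates the traditional baseline the abstract refers to: launching N instances of the same executable with fork and exec, here with an illustrative per-core pinning via sched_setaffinity. This is not the authors' library or the proposed vector interface; the executable path, NPROCS, and the pinning policy are assumptions for illustration only.

```c
/*
 * Minimal sketch (not the authors' library): spawn N instances of the same
 * executable with fork() + execv(), pinning child i to core i as an
 * illustrative placement policy. NPROCS and the executable path are
 * placeholders.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROCS 8   /* number of instances to launch (illustrative) */

int main(void)
{
    char *child_argv[] = { "/bin/true", NULL };  /* placeholder executable */
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);

    for (int i = 0; i < NPROCS; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                          /* child */
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(i % ncores, &set);
            sched_setaffinity(0, sizeof(set), &set);  /* pin to a core */

            execv(child_argv[0], child_argv);
            perror("execv");                     /* reached only on failure */
            _exit(EXIT_FAILURE);
        }
    }

    /* parent: reap all children */
    while (wait(NULL) > 0)
        ;
    return 0;
}
```

Each spawn in this baseline pays the full cost of fork (duplicating the parent's address-space metadata) followed by exec (reloading the binary and its shared libraries); the paper's reported speedups come from avoiding that repeated per-instance work when launching many copies of one executable.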