Nelder-Mead Simplex Optimization Routine for Large-Scale Problems: A Distributed Memory Implementation

  • Authors:
  • Kyle Klein; Julian Neira

  • Affiliations:
  • U.C. Santa Barbara, Santa Barbara, USA; U.C. Santa Barbara, Santa Barbara, USA

  • Venue:
  • Computational Economics
  • Year:
  • 2014

Abstract

The Nelder-Mead simplex method is an optimization routine that works well with irregular objective functions. For a function of $n$ parameters, it compares the objective function at the $n+1$ vertices of a simplex and updates the worst vertex through simplex search steps. However, a standard serial implementation can be prohibitively expensive for optimizations over a large number of parameters. We describe a parallel implementation of the Nelder-Mead method for distributed memory. For $p$ processors, each processor is assigned $(n+1)/p$ vertices at each iteration. Each processor then updates its locally worst vertices and communicates the results, and a new simplex is formed from the vertices of all processors. We also describe how the algorithm can be implemented with only two MPI commands. In simulations, our implementation exhibits large speedups and is scalable to large problem sizes.
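To make the iteration structure concrete, the sketch below shows one illustrative parallel update step in C with MPI, under the assumptions stated in the comments: each rank owns a contiguous block of $(n+1)/p$ vertices, replaces its locally worst vertex with a placeholder update rule, and shares its block via MPI_Allgather. The objective function, the update rule, and the choice of MPI_Allgather are illustrative assumptions, not the authors' implementation.

```c
/* Minimal sketch of one distributed simplex update step.
 * Assumptions (not from the paper): p divides n+1 evenly, the objective is a
 * toy quadratic, and the vertex update is a crude contraction standing in for
 * the usual reflection/expansion/contraction rules. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 7   /* number of parameters; simplex has N+1 = 8 vertices */

/* Placeholder objective: a simple quadratic bowl (illustrative only). */
static double objective(const double *x) {
    double s = 0.0;
    for (int i = 0; i < N; ++i) s += x[i] * x[i];
    return s;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    int local = (N + 1) / p;   /* vertices owned by this rank (assumes p | N+1) */
    double *mine = malloc((size_t)local * N * sizeof(double));
    double *all  = malloc((size_t)local * p * N * sizeof(double));

    /* Initialize this rank's block of the simplex with toy unit offsets. */
    for (int v = 0; v < local; ++v)
        for (int j = 0; j < N; ++j)
            mine[v * N + j] = (rank * local + v == j) ? 1.0 : 0.0;

    /* Find the locally worst vertex among this rank's block... */
    int worst = 0;
    double fworst = objective(&mine[0]);
    for (int v = 1; v < local; ++v) {
        double f = objective(&mine[v * N]);
        if (f > fworst) { fworst = f; worst = v; }
    }
    /* ...and replace it with a stand-in contraction toward the origin. */
    for (int j = 0; j < N; ++j) mine[worst * N + j] *= 0.5;

    /* Share every rank's updated block so all ranks hold the full simplex. */
    MPI_Allgather(mine, local * N, MPI_DOUBLE,
                  all,  local * N, MPI_DOUBLE, MPI_COMM_WORLD);

    if (rank == 0)
        printf("gathered %d vertices of dimension %d\n", local * p, N);

    free(mine);
    free(all);
    MPI_Finalize();
    return 0;
}
```

In a full iteration this gather would be followed by re-sorting the combined simplex and repeating until convergence; the single collective per iteration is what keeps the communication cost low as $n$ grows.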