A Parallel Implementation of the Simplex Function Minimization Routine

  • Authors:
  • Donghoon Lee, Matthew Wiswall

  • Affiliations:
  • Department of Economics, New York University, New York, USA 10003 (both authors)

  • Venue:
  • Computational Economics
  • Year:
  • 2007

Abstract

This paper generalizes the widely used Nelder and Mead (Comput J 7:308–313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which parallelize the tasks required to compute a specific objective function given a vector of parameters, our parallel simplex algorithm parallelizes at the parameter level. It assigns each processor a separate vector of parameters corresponding to a point on the simplex. The processors then conduct the simplex search steps for an improved point, communicate their results, and a new simplex is formed. The advantage of this method is that the algorithm is generic: it can be applied, without rewriting computer code, to any optimization problem to which the non-parallel Nelder–Mead algorithm is applicable. The method also scales easily to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings that, in some experiments, reach three times the number of processors.
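The parameter-level parallelization described in the abstract can be illustrated with a minimal sketch of one parallel simplex step. This is not the authors' implementation: the function names (`parallel_simplex_step`, `_trial_point`) are hypothetical, only the reflection move of Nelder–Mead is shown (no expansion, contraction, or shrink), and Python threads stand in for the separate processors that a real implementation (e.g. via MPI) would use. Each worker is assigned one of the worst vertices, proposes a reflected trial point, and evaluates the objective there; improved points then replace their vertices to form the new simplex.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _trial_point(args):
    # Worker task: reflect the assigned vertex through the centroid of the
    # retained vertices (standard reflection, alpha = 1) and evaluate f there.
    f, centroid, vertex = args
    trial = centroid + 1.0 * (centroid - vertex)
    return trial, f(trial)

def parallel_simplex_step(f, simplex, fvals, n_workers):
    """One step of a (hypothetical) parameter-level parallel simplex search.

    The n_workers worst vertices are each assigned to a worker, which
    proposes and evaluates a reflected trial point in parallel; trial
    points that improve on their vertex replace it, forming a new simplex.
    Requires n_workers < number of simplex vertices.
    """
    order = np.argsort(fvals)                    # sort vertices best-to-worst
    simplex, fvals = simplex[order], fvals[order]
    keep = len(simplex) - n_workers              # best vertices are retained
    centroid = simplex[:keep].mean(axis=0)
    jobs = [(f, centroid, simplex[keep + i]) for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(_trial_point, jobs))
    for i, (trial, ftrial) in enumerate(results):
        if ftrial < fvals[keep + i]:             # accept only improvements
            simplex[keep + i], fvals[keep + i] = trial, ftrial
    return simplex, fvals
```

Because each worker only needs an objective function and a parameter vector, this step can wrap any objective unchanged, which is the genericity the abstract emphasizes; the degree of parallelization is bounded by the number of vertices, i.e. the number of parameters plus one.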