This paper generalizes the widely used Nelder and Mead (Comput J 7:308-313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which parallelize the tasks required to compute a specific objective function given a vector of parameters, our parallel simplex algorithm parallelizes at the parameter level. It assigns to each processor a separate vector of parameters corresponding to a point on a simplex. The processors then conduct the simplex search steps for an improved point, communicate the results, and a new simplex is formed. The advantage of this method is that the algorithm is generic and can be applied, without rewriting computer code, to any optimization problem to which the non-parallel Nelder-Mead algorithm is applicable. The method is also easily scalable to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings that, in some experiments, reach up to three times the number of processors.
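To make the parameter-level idea concrete, the following is a minimal, illustrative sketch, not the authors' implementation: it assumes Python with numpy and the standard-library concurrent.futures, the function names (parallel_nelder_mead, _update_vertex) and the Rosenbrock test objective are placeholders, and the shrink step and convergence test of the full Nelder-Mead method are omitted for brevity. Each worker is assigned one of the worst vertices, performs reflection, expansion, and contraction against the centroid of the remaining vertices, and the gathered results form the new simplex.

```python
# Sketch of parameter-level parallel Nelder-Mead (illustrative, not the authors' code).
# With P workers, the P worst simplex vertices are updated concurrently: each worker
# reflects, expands, and contracts its assigned vertex through the centroid of the
# remaining vertices; results are gathered and a new simplex is formed.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def rosenbrock(x):
    """Standard test objective; stands in for any user-supplied function."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))


def _update_vertex(args):
    """One worker's simplex step for its assigned vertex."""
    f, vertex, f_vertex, centroid = args
    direction = centroid - vertex
    trials = [
        centroid + 1.0 * direction,   # reflection
        centroid + 2.0 * direction,   # expansion
        centroid - 0.5 * direction,   # (inside) contraction
    ]
    scored = [(f(x), x) for x in trials]
    best_f, best_x = min(scored, key=lambda c: c[0])
    # Keep the original vertex if no trial point improves on it.
    return (best_f, best_x) if best_f < f_vertex else (f_vertex, vertex)


def parallel_nelder_mead(f, x0, n_workers=2, max_iter=500, step=0.5):
    n = len(x0)
    n_workers = min(n_workers, n)  # parallelization is capped at the number of parameters
    # Initial simplex: x0 plus n vertices perturbed along each coordinate axis.
    simplex = np.vstack([x0] + [x0 + step * np.eye(n)[i] for i in range(n)])
    values = np.array([f(v) for v in simplex])
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(max_iter):
            order = np.argsort(values)
            simplex, values = simplex[order], values[order]
            # The n_workers worst vertices are updated concurrently against the
            # centroid of the remaining (better) vertices.
            worst = range(n + 1 - n_workers, n + 1)
            centroid = simplex[: n + 1 - n_workers].mean(axis=0)
            tasks = [(f, simplex[i], values[i], centroid) for i in worst]
            for i, (fv, xv) in zip(worst, pool.map(_update_vertex, tasks)):
                values[i], simplex[i] = fv, xv
    best = int(np.argmin(values))
    return simplex[best], values[best]


if __name__ == "__main__":
    x_opt, f_opt = parallel_nelder_mead(rosenbrock, np.zeros(4), n_workers=2)
    print(x_opt, f_opt)
```

Because each worker carries out a whole vertex update rather than splitting the evaluation of the objective function itself, the same wrapper applies to any objective without touching its code, which is the genericity and scalability (up to the number of parameters) point made in the abstract.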