Direct search optimization algorithms are becoming an important alternative to well-established gradient-based methods. Because a single cost-function evaluation may take a substantial amount of time, optimization can be a lengthy process; to shorten the run time, one often resorts to parallel algorithms. Asynchronous algorithms are particularly efficient because they have no synchronization points. This paper attempts to establish a convergence theory for a class of such parallel direct search algorithms. The notion of a search direction generator (SDG) is introduced, and an algorithmic framework for parallel distributed optimization methods based on SDGs is presented along with the corresponding convergence theory. The theory almost completely decouples the stepsize control from the sufficient-descent requirement, which is necessary for the finite termination of the algorithm's inner loop. The proposed framework has several attributes considered very favourable in loosely coupled parallel systems (e.g. clusters of workstations), such as fault tolerance and scalability. The framework is illustrated by optimizing a set of test problems on a cluster of workstations. In all tested cases, the speedup obtained increased with the number of workstations. Fault tolerance and scalability were also demonstrated by removing workstations from, and adding them to, the cluster while an optimization run was in progress.
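To make the ingredients concrete, the following is a minimal serial sketch of a direct search step that uses a sufficient-descent test rather than simple decrease. It is not the paper's parallel asynchronous framework: the paper runs such polls asynchronously across workers and draws directions from a search direction generator (SDG), whereas here plus/minus coordinate directions stand in for the SDG's output, and the names and constants (`poll`, the descent constant `c`) are illustrative assumptions.

```python
# Serial direct search sketch (assumed illustration, not the paper's method).
# A trial point is accepted only if it yields *sufficient* descent,
#   f(x + h*d) <= f(x) - c*h**2,
# which is the kind of condition that permits finite termination of an
# inner poll loop; otherwise the stepsize h is halved.

def direct_search(f, x0, h0=1.0, c=1e-4, h_min=1e-8, max_iter=1000):
    x = list(x0)
    fx = f(x)
    h = h0
    n = len(x)
    # +/- unit coordinate directions stand in for an SDG's output
    dirs = [[(1 if j == i else 0) for j in range(n)] for i in range(n)]
    dirs += [[-d for d in v] for v in dirs]
    for _ in range(max_iter):
        if h < h_min:
            break
        improved = False
        for d in dirs:                    # poll the current direction set
            trial = [xi + h * di for xi, di in zip(x, d)]
            ft = f(trial)
            if ft <= fx - c * h * h:      # sufficient descent, not mere decrease
                x, fx, improved = trial, ft, True
                break
        if not improved:                  # no direction gave sufficient descent
            h *= 0.5
    return x, fx

# Usage: minimize a simple shifted quadratic
xmin, fmin = direct_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
                           [0.0, 0.0])
```

In the asynchronous setting described above, each worker would poll directions independently and report improvements without waiting at a synchronization point; the decoupling of the stepsize update from the descent test is what this serial loop only hints at.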