Concurrent function evaluations in local and global optimization. Computer Methods in Applied Mechanics and Engineering.
An asynchronous parallel Newton method. Mathematical Programming, Series A and B.
A perturbed parallel decomposition method for a class of nonsmooth convex minimization problems. SIAM Journal on Control and Optimization.
A parallel asynchronous Newton algorithm for unconstrained optimization. Journal of Optimization Theory and Applications.
Convergence and numerical results for a parallel asynchronous quasi-Newton method. Journal of Optimization Theory and Applications.
Parallel Gradient Distribution in Unconstrained Optimization. SIAM Journal on Control and Optimization.
A parallel descent algorithm for convex programming. Computational Optimization and Applications.
New Inexact Parallel Variable Distribution Algorithms. Computational Optimization and Applications.
Impact of Partial Separability on Large-Scale Optimization. Computational Optimization and Applications.
A new training algorithm for multilayer discrete perceptrons. Neural, Parallel & Scientific Computations.
Testing Unconstrained Optimization Software. ACM Transactions on Mathematical Software.
Parallel Variable Transformation in Unconstrained Optimization. SIAM Journal on Optimization.
Parallel Variable Distribution for Constrained Optimization. Computational Optimization and Applications.
On a second order parallel variable transformation approach. The Korean Journal of Computational & Applied Mathematics.
Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and parallel gradient distribution (PGD) algorithms (O. L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable space into subspaces and distribute the resulting subproblems among parallel processors. It is shown that if the decomposed subproblems are fully decoupled from one another, they can be solved independently; otherwise, the parallel algorithms presented in this paper can be used. Numerical experiments show that these parallel algorithms reduce computing time, particularly for medium- and large-scale problems. Up to six processors connected over an Ethernet network are used to solve four large-scale minimization problems, and the results are compared with those obtained by running sequential algorithms on a single processor. An application of the PSDM algorithms to the training of multilayer Adaptive Linear Neurons (Madaline), together with a new parallel architecture for such training, is also presented.
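The space-decomposition idea described above can be illustrated with a minimal sketch. The Python code below is a hypothetical block-Jacobi simplification, not the paper's PSDM, PVT, or PGD algorithms: the variables are split into blocks, each block subproblem is minimized in parallel with the remaining variables held fixed, and the synchronization step simply keeps the best candidate point. All names and parameters (extended_rosenbrock, psd_minimize, n_blocks, the choice of L-BFGS-B for the subproblems) are illustrative assumptions.

```python
# Hypothetical sketch of parallel space-decomposition minimization
# (block-Jacobi simplification with a "keep the best candidate" synchronization).
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import minimize


def extended_rosenbrock(x):
    """A standard unconstrained test function, used here only as a demo objective."""
    return np.sum(100.0 * (x[1::2] - x[::2] ** 2) ** 2 + (1.0 - x[::2]) ** 2)


def solve_block(args):
    """Minimize f over one block of variables, with all other variables frozen."""
    f, x, block = args
    x = x.copy()

    def f_block(z):
        x[block] = z
        return f(x)

    res = minimize(f_block, x[block], method="L-BFGS-B")
    x[block] = res.x
    return x, res.fun


def psd_minimize(f, x0, n_blocks=4, max_outer=50, tol=1e-8):
    """Outer loop: solve the block subproblems in parallel, then synchronize."""
    x = np.asarray(x0, dtype=float)
    blocks = np.array_split(np.arange(x.size), n_blocks)
    f_best = f(x)
    with ProcessPoolExecutor(max_workers=n_blocks) as pool:
        for _ in range(max_outer):
            candidates = list(pool.map(solve_block, [(f, x, b) for b in blocks]))
            x_new, f_new = min(candidates, key=lambda c: c[1])  # best candidate wins
            if f_best - f_new < tol:
                break
            x, f_best = x_new, f_new
    return x, f_best


if __name__ == "__main__":
    x_star, f_star = psd_minimize(extended_rosenbrock, np.full(40, -1.2), n_blocks=4)
    print(f"f(x*) = {f_star:.3e}")
```

The "best candidate" synchronization guarantees monotone descent but discards the other blocks' work each iteration; the PVT/PGD frameworks the paper builds on combine the block updates more carefully (e.g., through a synchronization step over all subspace directions), which is where their efficiency on coupled problems comes from.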