Study of multiscale global optimization based on parameter space partition
Journal of Global Optimization
Conventional global optimization algorithms accept, with a certain probability, a new solution that worsens the objective function in order to escape local optima. However, their computational cost is high and the global optimum is still not guaranteed. This study develops a Neighbourhood Determination (ND) global optimization method. The boundary of each local optimum's neighbourhood is determined and carved out of the domain by a Wave Front Propagation (WFP) method, so further optimization is needed only in the irregularly shaped undivided domain. The mapping from initial solutions to final optimum neighbourhoods is used to train a neural network with the Levenberg-Marquardt algorithm; this network then replaces the costly WFP step, predicting the neighbourhood of a new point in the remaining domain with dramatically higher efficiency. Once the definition domain is completely divided into subdomains, the true global optimal solution is identified. Numerical examples demonstrate the method's high efficiency, global search ability, robustness and stability.
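The core idea of partitioning the definition domain into neighbourhoods of local optima can be illustrated with a much simpler stand-in for the paper's ND/WFP machinery: run a local optimizer from a grid of start points, group the starts that converge to the same optimum (each group approximates that optimum's neighbourhood), and take the best optimum found once the domain is covered. The sketch below is a minimal 1-D illustration under that simplification; the objective function, step sizes, and merge tolerance are all hypothetical choices, not the authors' method.

```python
import math

# Hypothetical multimodal objective used only for illustration.
def f(x):
    return math.sin(3 * x) + 0.1 * x * x

def local_minimize(x, lo, hi, lr=0.01, steps=2000):
    """Plain gradient descent with a numerical gradient, clipped to [lo, hi]."""
    for _ in range(steps):
        h = 1e-6
        g = (f(x + h) - f(x - h)) / (2 * h)  # central-difference gradient
        x = min(hi, max(lo, x - lr * g))
    return x

def global_minimum(lo=-3.0, hi=3.0, n_starts=25):
    minima = []
    for i in range(n_starts):
        x0 = lo + (hi - lo) * i / (n_starts - 1)
        xm = local_minimize(x0, lo, hi)
        # Starts converging to the same optimum form (roughly) that optimum's
        # neighbourhood; merging them partitions the domain into subdomains.
        if not any(abs(xm - m) < 1e-3 for m in minima):
            minima.append(xm)
    # Once every start point is assigned to a neighbourhood, the best local
    # optimum found is the global one.
    return min(minima, key=f)

xg = global_minimum()  # approximately -0.51 for this objective
```

In the paper, the expensive neighbourhood-boundary computation (WFP) is replaced after a training phase by a Levenberg-Marquardt-trained neural network that predicts which neighbourhood a new start point belongs to; in this sketch that role is played, crudely, by simply rerunning the local optimizer.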