Identification and control of dynamical systems using neural networks
IEEE Transactions on Neural Networks
A new class of quasi-Newtonian methods for optimal learning in MLP-networks
IEEE Transactions on Neural Networks
Genetic evolution of the topology and weight distribution of neural networks
IEEE Transactions on Neural Networks
A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for effective training of feedforward neural networks. The genetic algorithm, with adaptive reproduction, crossover, and mutation operators, searches for the initial weights and biases of the network, while the modified Newton method, similar to the BFGS algorithm, refines them to improve training performance. Benchmark tests show that the two-stage algorithm outperforms several conventional methods: steepest descent, steepest descent with an adaptive learning rate, conjugate gradient, and Newton-based methods; it is well suited to small networks in engineering applications. In addition to numerical simulation, the effectiveness of the two-stage algorithm is validated by experiments in system identification and vibration suppression.
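The two-stage idea described above can be sketched in a minimal form: a simple genetic algorithm (here with tournament selection, blend crossover, and Gaussian mutation, which stand in for the paper's adaptive operators) searches for initial weights of a tiny feedforward network, and a quasi-Newton (BFGS) optimizer then refines the best individual. All network sizes, operator settings, and the toy fitting task are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): fit y = x^2 on [-1, 1]
# with a 1-4-1 tanh network.
X = np.linspace(-1.0, 1.0, 20)
Y = X ** 2
N_HIDDEN = 4
DIM = 3 * N_HIDDEN + 1  # input weights, hidden biases, output weights, output bias

def unpack(w):
    n = N_HIDDEN
    return w[:n], w[n:2 * n], w[2 * n:3 * n], w[3 * n]

def forward(w, x):
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(np.outer(x, w1) + b1)  # hidden layer activations
    return h @ w2 + b2

def mse(w):
    return float(np.mean((forward(w, X) - Y) ** 2))

# Stage 1: genetic search for a good initial weight vector.
def ga_stage(pop_size=40, gens=30, mut_sigma=0.3, mut_rate=0.1):
    pop = rng.uniform(-1.0, 1.0, (pop_size, DIM))
    for _ in range(gens):
        fit = np.array([mse(ind) for ind in pop])
        new = [pop[np.argmin(fit)]]  # elitism: keep the best individual
        while len(new) < pop_size:
            # Binary tournament selection for each parent
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]
            # Blend crossover followed by sparse Gaussian mutation
            alpha = rng.random()
            child = alpha * a + (1.0 - alpha) * b
            child = child + rng.normal(0.0, mut_sigma, DIM) * (rng.random(DIM) < mut_rate)
            new.append(child)
        pop = np.array(new)
    fit = np.array([mse(ind) for ind in pop])
    return pop[np.argmin(fit)]

# Stage 2: quasi-Newton (BFGS) refinement from the GA's best individual.
w0 = ga_stage()
res = minimize(mse, w0, method="BFGS")
print(f"GA stage MSE: {mse(w0):.4g}, after BFGS: {res.fun:.4g}")
```

The division of labor mirrors the abstract: the global (GA) stage avoids the poor local minima that a purely gradient-based start can fall into, while the quasi-Newton stage supplies the fast local convergence that a GA alone lacks.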