Expert Systems with Applications: An International Journal
In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the convergence rate of neural-network training. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN updates. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested on a set of nine benchmark problems. Computational results show that they outperform existing methods evaluated on the same test problems.
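The scheme described above can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: at each iteration it generates candidate search directions from two representative QN inverse-Hessian updates (BFGS and DFP, each with an Oren-Luenberger-type self-scaling factor), runs an inexact Armijo backtracking line search along each, and keeps the best resulting point. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def armijo_search(f, x, d, g, alpha=1.0, beta=0.5, c=1e-4, max_steps=30):
    """Inexact line search: shrink alpha until the Armijo condition holds."""
    fx = f(x)
    slope = g @ d                       # directional derivative along d
    for _ in range(max_steps):
        if f(x + alpha * d) <= fx + c * alpha * slope:
            break
        alpha *= beta
    return alpha

def multi_direction_qn_step(f, grad, x, H_bfgs, H_dfp):
    """One iteration: try the BFGS and DFP directions, return the best point."""
    g = grad(x)
    candidates = []
    for H in (H_bfgs, H_dfp):
        d = -H @ g                      # quasi-Newton search direction
        alpha = armijo_search(f, x, d, g)
        x_new = x + alpha * d
        candidates.append((f(x_new), x_new))
    return min(candidates, key=lambda t: t[0])[1]

def update_inverse_hessians(H_bfgs, H_dfp, s, y):
    """Self-scaling BFGS and DFP updates of the inverse-Hessian estimates."""
    sy = s @ y
    if sy <= 1e-12:                     # skip update if curvature condition fails
        return H_bfgs, H_dfp
    rho = 1.0 / sy
    I = np.eye(len(s))
    # Self-scaling BFGS update.
    tau_b = sy / (y @ (H_bfgs @ y))     # Oren-Luenberger scaling factor
    V = I - rho * np.outer(s, y)
    H_bfgs = tau_b * (V @ H_bfgs @ V.T) + rho * np.outer(s, s)
    # Self-scaling DFP update.
    Hy = H_dfp @ y
    tau_d = sy / (y @ Hy)
    H_dfp = tau_d * (H_dfp - np.outer(Hy, Hy) / (y @ Hy)) + rho * np.outer(s, s)
    return H_bfgs, H_dfp
```

In a full parallel version, the line searches along the candidate directions would run concurrently (e.g. one worker per direction); the sketch evaluates them sequentially for clarity. A minimal usage loop on a toy quadratic objective:

```python
f = lambda x: np.sum((x - 1.0) ** 2)
grad = lambda x: 2.0 * (x - 1.0)
x, Hb, Hd = np.zeros(3), np.eye(3), np.eye(3)
for _ in range(20):
    x_new = multi_direction_qn_step(f, grad, x, Hb, Hd)
    Hb, Hd = update_inverse_hessians(Hb, Hd, x_new - x, grad(x_new) - grad(x))
    x = x_new
```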