Differential Evolution (DE) is a floating-point-encoded evolutionary algorithm for global optimization. It has been demonstrated to be an efficient, effective, and robust optimization method, particularly for problems with continuous variables. This paper applies a DE-based algorithm to training Radial Basis Function (RBF) networks, where the variables to be optimized are the centres, weights, and widths of the RBFs. The proposed algorithm consists of three steps: initial tuning, which searches for the centre, weight, and width of a one-node RBF network; local tuning, which optimizes these three variables of the one-node network; and global tuning, which optimizes all the parameters of the whole network together. Both the local and global tuning steps use a cycling scheme to find the network parameters. The Mean Square Error between the desired and actual outputs is used as the objective function to be minimized. Training is demonstrated by approximating a set of test functions using different DE strategies. A comparison with several approaches reported in the literature shows that the resulting network performs better on the tested functions, improving on the reported approximation results.
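The core idea can be sketched as follows: encode all RBF parameters (centres, weights, widths) as one real-valued vector and let classic DE/rand/1/bin minimise the Mean Square Error between desired and actual outputs. This is a minimal illustration, not the paper's three-step algorithm; the target function, node count, population size, and the control parameters F and CR are illustrative assumptions.

```python
# Hedged sketch: DE/rand/1/bin training a small Gaussian RBF network.
# Assumed settings (not from the paper): 5 nodes, pop 40, F=0.5, CR=0.9.
import numpy as np

rng = np.random.default_rng(0)

def rbf_output(params, x, nodes):
    # params = [c_1..c_M, w_1..w_M, s_1..s_M]: centres, weights, widths
    c = params[:nodes]
    w = params[nodes:2 * nodes]
    s = np.abs(params[2 * nodes:]) + 1e-6          # keep widths positive
    phi = np.exp(-((x[:, None] - c[None, :]) ** 2) / (2 * s[None, :] ** 2))
    return phi @ w                                  # weighted sum of basis functions

def mse(params, x, y, nodes):
    # objective: Mean Square Error between desired and actual outputs
    return np.mean((rbf_output(params, x, nodes) - y) ** 2)

def de_train(x, y, nodes=5, pop_size=40, F=0.5, CR=0.9, gens=300):
    dim = 3 * nodes
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    cost = np.array([mse(p, x, y, nodes) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                # DE/rand/1 mutation
            cross = rng.random(dim) < CR            # binomial crossover mask
            cross[rng.integers(dim)] = True         # at least one gene from mutant
            trial = np.where(cross, mutant, pop[i])
            tc = mse(trial, x, y, nodes)
            if tc <= cost[i]:                       # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Illustrative approximation task: fit sin(x) on [-3, 3]
x = np.linspace(-3.0, 3.0, 60)
y = np.sin(x)
params, err = de_train(x, y)
```

In this sketch every individual carries the whole network, closest in spirit to the global-tuning step; the paper's initial and local tuning stages instead grow the network one node at a time before tuning everything together.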