Much research has been devoted to selecting the kernel parameters of fault-free radial basis function (RBF) neural networks, including the centers, kernel width, and weights. However, most of this work concerns center and weight identification, and far less attention has been paid to kernel width selection. Moreover, to our knowledge, no effective and practical method has been proposed for selecting the optimal kernel width of faulty RBF neural networks. Since node faults inevitably occur in real applications, producing a great many possible faulty networks, the traditional approach, i.e., the test set method, takes a long time to calculate the mean prediction error (MPE). This letter therefore derives a formula that estimates the MPE of each candidate width value and uses it to select the width with the lowest MPE for RBF neural networks suffering multi-node open fault. Simulation results show that the optimal kernel width chosen by the proposed MPE formula is very close to the one found by the conventional method, and that the proposed formula outperforms other selection methods designed for fault-free neural networks.
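To make the baseline concrete, the conventional test set method the letter compares against can be sketched as follows: train an RBF network for each candidate width, simulate multi-node open faults by zeroing random hidden nodes, and pick the width with the lowest Monte Carlo MPE. The toy data, fixed centers, fault rate, and trial count below are illustrative assumptions, not values from the letter, and the letter's own contribution (the closed-form MPE estimate) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (illustrative, not from the letter).
x_train = np.linspace(-4, 4, 60)
y_train = np.sinc(x_train) + 0.05 * rng.standard_normal(x_train.size)
x_test = np.linspace(-4, 4, 200)
y_test = np.sinc(x_test)

centers = np.linspace(-4, 4, 15)  # fixed RBF centers (assumed given)

def design(x, width):
    """Gaussian RBF design matrix: one column per hidden node."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def mpe_test_set(width, fault_rate=0.2, n_trials=200):
    """Monte Carlo estimate of the mean prediction error under
    multi-node open fault: each hidden node's output is zeroed
    (opened) independently with probability `fault_rate`."""
    Phi = design(x_train, width)
    w = np.linalg.lstsq(Phi, y_train, rcond=None)[0]  # least-squares weights
    Phi_t = design(x_test, width)
    errs = []
    for _ in range(n_trials):
        mask = rng.random(centers.size) >= fault_rate  # surviving nodes
        y_hat = Phi_t[:, mask] @ w[mask]               # faulty network output
        errs.append(np.mean((y_hat - y_test) ** 2))
    return float(np.mean(errs))

# Sweep candidate widths and pick the one with the lowest MPE.
widths = np.linspace(0.2, 2.0, 10)
mpes = [mpe_test_set(s) for s in widths]
best_width = widths[int(np.argmin(mpes))]
```

Note the cost that motivates the letter: every candidate width requires retraining plus many fault simulations, which is exactly what a closed-form MPE estimate avoids.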