Weight decay is a simple regularization method for improving the generalization ability of multilayered perceptrons (MLPs). In addition, weight decay can improve the fault tolerance of MLPs. However, most existing generalization error results for the weight decay method cover fault-free MLPs only. For faulty MLPs, using a test set to study the generalization ability is not practical because a trained network has a huge number of possible faulty variants. This paper develops a prediction error formula for predicting the performance of faulty MLPs. The prediction error result allows us to select an appropriate model for MLPs under the open node fault situation.
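To make the setting concrete, the following is a minimal sketch (not the paper's prediction formula) of the two ideas in the abstract: training a one-hidden-layer MLP with a weight decay penalty, and estimating the faulty-network error empirically by enumerating single open node faults, i.e., hidden nodes whose output is forced to zero. The network size, decay coefficient `lam`, learning rate, and toy regression task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): y = sin(x) + noise
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))

H = 16  # number of hidden nodes (illustrative choice)
W1 = 0.5 * rng.standard_normal((1, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1))
b2 = np.zeros(1)

lam = 1e-3  # weight decay coefficient (assumed value)
lr = 0.05   # learning rate (assumed value)

def forward(X, mask=None):
    """Forward pass; a mask of zeros models open node faults
    (faulty hidden nodes output zero)."""
    h = np.tanh(X @ W1 + b1)
    if mask is not None:
        h = h * mask
    return h, h @ W2 + b2

# Gradient descent on MSE + lam * ||W||^2 (weight decay)
for _ in range(2000):
    h, out = forward(X)
    err = out - y
    gW2 = h.T @ err / len(X) + lam * W2
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X) + lam * W1
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, out = forward(X)
mse_fault_free = float(np.mean((out - y) ** 2))

# Empirical faulty-network error: average MSE over all single
# open node faults, each hidden node equally likely to fail.
faulty_mses = []
for j in range(H):
    mask = np.ones(H)
    mask[j] = 0.0
    _, out_f = forward(X, mask)
    faulty_mses.append(np.mean((out_f - y) ** 2))
mse_faulty = float(np.mean(faulty_mses))

print(mse_fault_free, mse_faulty)
```

Even this brute-force average only covers single-node faults; with multiple concurrent faults the number of faulty networks grows combinatorially, which is exactly why an analytic prediction error formula, rather than test-set evaluation of every faulty network, is needed.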