The weight-decay technique is an effective approach to handling both overfitting and weight faults. For fault-free networks, without an appropriate value of the decay parameter the trained network is either overfitted or underfitted, yet many existing results on selecting the decay parameter consider fault-free networks only. It is well known that weight decay can also suppress the effect of weight faults. In the faulty case, however, using a test set to select the decay parameter is not practical, because a single trained network has a huge number of possible faulty realizations. This paper develops two mean prediction error (MPE) formulae for predicting the performance of faulty radial basis function (RBF) networks. Two fault models are considered: multiplicative weight noise and open weight fault. The MPE formulae involve only the training error and the trained weights, so there is no need to generate a large number of faulty networks and measure their test errors. The formulae thus allow appropriate values of the decay parameter to be selected for faulty networks. Experiments show that, although there are small differences between the true test errors (measured on a test set) and the MPE values, the MPE formulae accurately locate the decay parameter value that minimizes the true test error of faulty networks.
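To make the setting concrete, below is a minimal sketch, not the paper's exact MPE formulae: an RBF network fitted by ridge regression (the weight-decay solution), with the expected error under multiplicative weight noise predicted analytically from the trained weights alone and checked against a Monte Carlo average over many faulty copies. The data, the Gaussian centers and width, and the values of the decay parameter `lam` and noise variance `sigma_b2` are all illustrative assumptions; the analytic expectation used here is the standard one for i.i.d. zero-mean multiplicative noise, standing in for the paper's MPE expressions.

```python
# Illustrative sketch (assumed setup, not the paper's exact MPE formulae):
# ridge-regularized RBF regression, plus an analytic prediction of the
# expected MSE under multiplicative weight noise w_j -> w_j * (1 + b_j),
# b_j ~ N(0, sigma_b2) i.i.d., compared against Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed for illustration).
x = rng.uniform(-3, 3, size=200)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# RBF design matrix: Gaussian basis functions on a fixed grid of centers.
centers = np.linspace(-3, 3, 20)
width = 0.5
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

lam = 1e-2        # weight-decay parameter (the quantity being selected)
sigma_b2 = 0.05   # variance of the multiplicative weight noise (assumed)

# Weight-decay (ridge) solution: w = (Phi^T Phi + lam I)^{-1} Phi^T y.
G = Phi.T @ Phi
w = np.linalg.solve(G + lam * np.eye(len(centers)), Phi.T @ y)

# Analytic expectation over the noise: for each sample,
# E[(y_i - sum_j phi_ij w_j (1+b_j))^2] = e_i^2 + sigma_b2 * sum_j phi_ij^2 w_j^2,
# so E[MSE] = MSE(w) + sigma_b2 * sum_j w_j^2 * G_jj / N,
# computable from the trained weights and training error alone.
mse_clean = np.mean((y - Phi @ w) ** 2)
predicted = mse_clean + sigma_b2 * np.sum(w**2 * np.diag(G)) / len(y)

# Monte Carlo over many faulty networks -- exactly the expensive step an
# analytic prediction lets us skip.
mc = np.mean([
    np.mean((y - Phi @ (w * (1 + np.sqrt(sigma_b2) * rng.standard_normal(w.shape)))) ** 2)
    for _ in range(2000)
])
print(f"predicted E[MSE] = {predicted:.5f}, Monte Carlo = {mc:.5f}")
```

Sweeping `lam` over a grid and choosing the value that minimizes the predicted expectation mirrors how the MPE formulae are used for decay-parameter selection: larger `lam` shrinks the weights and hence the noise-induced term `sigma_b2 * sum_j w_j^2 * G_jj / N`, at the cost of a larger clean-fit error, and the predicted curve exposes that trade-off without sampling faulty networks.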