Prediction error is a powerful tool for measuring the performance of a neural network. In this paper, we extend the technique to a class of fault-tolerant neural networks. For a neural network subject to multiple-node faults, we derive its generalized prediction error, from which the effective number of parameters of such a fault-tolerant network is obtained. The difficulty of obtaining the mean prediction error is discussed. Finally, a simple empirical procedure for estimating the prediction error is suggested.
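The abstract does not spell out the estimation procedure, but the idea of measuring prediction error under multiple-node faults can be illustrated with a minimal sketch. The following is an assumption-laden example (the network, fault model, and all names are illustrative, not taken from the paper): each hidden node of a small network fails independently, with a faulty node stuck at zero, and the mean squared prediction error is estimated by Monte Carlo sampling over fault patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-hidden-layer network; sizes and weights are
# illustrative only, not from the paper.
n_in, n_hidden, n_out = 4, 16, 1
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = rng.normal(size=n_out)

def forward(X, node_mask):
    """Forward pass; hidden nodes are zeroed where node_mask == 0."""
    h = np.tanh(X @ W1.T + b1) * node_mask  # a faulty node outputs zero
    return h @ W2.T + b2

def estimate_fault_prediction_error(X, y, fault_prob, n_trials=200):
    """Monte Carlo estimate of the mean squared prediction error when
    each hidden node fails independently with probability fault_prob."""
    errs = []
    for _ in range(n_trials):
        mask = (rng.random(n_hidden) >= fault_prob).astype(float)
        pred = forward(X, mask)
        errs.append(np.mean((pred - y) ** 2))
    return float(np.mean(errs))

# Toy data: use the fault-free network outputs as targets, so the
# fault-free error is exactly zero and any increase is due to faults.
X = rng.normal(size=(100, n_in))
y = forward(X, np.ones(n_hidden))

err_clean = estimate_fault_prediction_error(X, y, fault_prob=0.0)
err_faulty = estimate_fault_prediction_error(X, y, fault_prob=0.1)
print(err_clean, err_faulty)
```

Averaging over sampled fault patterns is only a stand-in for the analytic mean prediction error whose derivation the paper addresses; the sketch merely shows why node faults inflate the measured error.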