Fault tolerance is an important issue for multilayer feedforward networks (MFNs). In the classical training approach for handling open node faults and open weight faults, however, many potential faulty networks must be considered explicitly; when the number of faulty networks included in the objective function is large, training becomes very time consuming. This paper derives two objective functions for attaining fault-tolerant MFNs: one designed for handling open node faults and the other for handling open weight faults. With a linearization technique, each of these objective functions can be decomposed into two terms, the training error and a simple regularization term. The resulting objective functions are computationally simple, so the conventional backpropagation algorithm can be applied directly to them.
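The abstract does not spell out the form of the derived regularizers, so the following is only a minimal sketch of the general recipe it describes: minimize a composite objective, training error plus a simple regularization term, with ordinary backpropagation, rather than averaging the error over an explicit ensemble of faulty networks. The quadratic weight penalty, the learning rate, the network sizes, and the name train_fault_tolerant_mfn are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def train_fault_tolerant_mfn(X, y, n_hidden=10, lam=1e-3, lr=0.01,
                             epochs=1000, seed=0):
    """Backpropagation on J = training error + lam * regularizer.

    The quadratic weight penalty below is a stand-in for the paper's
    derived fault-tolerance regularizer (not given in the abstract);
    the training loop itself is a standard one-hidden-layer MFN.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_in = X.shape
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # input-to-hidden weights
    w2 = rng.normal(0.0, 0.1, n_hidden)          # hidden-to-output weights

    for _ in range(epochs):
        h = np.tanh(X @ W1)        # hidden activations, shape (N, n_hidden)
        err = h @ w2 - y           # residuals of the linear output, shape (N,)

        # Backprop gradients of the training-error term (1/2N) * sum(err^2).
        g_w2 = h.T @ err / n_samples
        g_h = (np.outer(err, w2) / n_samples) * (1.0 - h**2)
        g_W1 = X.T @ g_h

        # Gradient of the (assumed) quadratic regularization term.
        g_w2 += 2.0 * lam * w2
        g_W1 += 2.0 * lam * W1

        w2 -= lr * g_w2
        W1 -= lr * g_W1
    return W1, w2

# Example: fit a noisy 1-D regression problem.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
W1, w2 = train_fault_tolerant_mfn(X, y)
```

The computational saving the abstract emphasizes is visible in the loop: no faulty copy of the network is ever instantiated, and the cost of the fault-tolerance term is a single extra gradient contribution per weight matrix.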