Particular levels of partial fault tolerance (PFT) in feedforward artificial neural networks of a given size can be obtained by redundancy (replicating a smaller, normally trained network), by design (training specifically to increase PFT), or by a combination of the two (replicating a smaller PFT-trained network). This letter investigates which of these methods achieves the highest PFT per network size (total number of units and connections) for classification problems. It concludes that for non-toy problems, there exists a normally trained network of optimal size that, when replicated, produces the smallest fully fault-tolerant network. It also shows that for particular network sizes, the best level of PFT is achieved by training a network of that size for fault tolerance.

The results and discussion demonstrate how the outcome depends on how saturated the network's nodes are when classifying data points. With simple training tasks, where the complexity of the problem and the size of the network are well within the ability of the training method, the hidden-layer nodes operate close to their saturation points, and classification is clean. Under such circumstances, replicating the smallest normally trained correct network yields the highest PFT for any given network size. For hard training tasks (difficult classification problems, or network sizes close to the minimum), normal training produces networks whose nodes do not operate close to their saturation points, and outputs are farther from their targets. In this case, training a larger network for fault tolerance yields better PFT than replicating a smaller, normally trained network. However, since fault-tolerant training on its own produces networks that operate closer to their linear regions than normal training does, replicating normally trained networks ultimately leads to better PFT than replicating fault-tolerant networks of the same initial size.
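The redundancy approach described above can be illustrated with a minimal NumPy sketch (not code from the letter itself; the `forward` and `replicate` helpers and all weight values are illustrative assumptions). Replicating the hidden layer k times and dividing the output weights by k leaves the fault-free function unchanged, while a single stuck-at-zero fault now removes only 1/k of the affected unit's contribution:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # One sigmoid hidden layer, linear output (illustrative architecture).
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

def replicate(W1, b1, W2, k):
    # Replicate the hidden layer k times and scale the output weights
    # by 1/k, so the fault-free output is identical but each hidden
    # unit carries only 1/k of its original share of the output.
    return np.tile(W1, (k, 1)), np.tile(b1, k), np.tile(W2, (1, k)) / k

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)   # stand-in for a
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)   # trained network
x = rng.normal(size=3)

y = forward(x, W1, b1, W2, b2)
W1r, b1r, W2r = replicate(W1, b1, W2, 3)
yr = forward(x, W1r, b1r, W2r, b2)
assert np.allclose(y, yr)  # replication preserves the fault-free output

# Inject a single stuck-at-zero fault on hidden unit 0 in each network.
W2f = W2.copy(); W2f[:, 0] = 0.0
W2rf = W2r.copy(); W2rf[:, 0] = 0.0
err = abs(forward(x, W1, b1, W2f, b2) - y)
err_r = abs(forward(x, W1r, b1r, W2rf, b2) - y)
# The replicated network's output error under this fault is 1/3 of the
# original network's, since the failed copy contributes only W2/3.
```

The letter's point about saturation fits this picture: when hidden units are saturated, each unit's output is near 0 or 1 and the classification margin absorbs the residual 1/k perturbation, so replication alone can achieve full fault tolerance.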