A method is proposed to estimate the fault tolerance (FT) of feedforward artificial neural nets (ANNs) and to synthesize robust nets. The fault model abstracts a variety of failure modes as permanent stuck-at faults. A procedure is developed to build FT ANNs by replicating the hidden units; it exploits the intrinsic weighted-summation operation performed by the processing units to overcome faults. Metrics are devised to quantify FT as a function of redundancy, and a lower bound on the redundancy required to tolerate all possible single faults is analytically derived: anything less than triple modular redundancy (TMR) cannot provide complete FT against all possible single faults. The actual redundancy needed to synthesize a completely FT net is specific to the problem at hand and is usually much higher than this general lower bound, so the conventional TMR scheme of triplication and majority voting is the best way to achieve complete FT in most ANNs. Although the redundancy needed for complete FT is substantial, the ANNs exhibit good partial FT to begin with and degrade gracefully. The first replication yields the maximum enhancement in partial FT; successive replications yield diminishing gains. For large nets, exhaustive testing of all possible single faults is prohibitive, so the strategy of randomly testing a small fraction of the total number of links is adopted. It yields partial FT estimates that are very close to those obtained by exhaustive testing. When the fraction of links tested is held fixed, the accuracy of the estimate generated by random testing improves as the net size grows.
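The replication scheme the abstract describes can be illustrated with a minimal sketch. This is not the paper's code; it assumes a single-hidden-layer net with tanh hidden units and a linear output, and models a permanent stuck-at-0 fault as a hidden unit whose output contributes nothing. Each hidden unit is replicated k times and its outgoing weights are divided by k, so the fault-free function is unchanged while the weighted summation at the output masks the loss of any one replica.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, x):
    """Single-hidden-layer net: tanh hidden units, linear output."""
    return W2 @ np.tanh(W1 @ x)

def replicate(W1, W2, k):
    """Replicate each hidden unit k times and divide the outgoing
    weights by k, so the fault-free input-output map is unchanged."""
    return np.tile(W1, (k, 1)), np.tile(W2, (1, k)) / k

def stuck_at_zero(W1, W2, unit):
    """Permanent stuck-at-0 fault: zero the faulty unit's outgoing weights,
    so its activation no longer reaches the output."""
    W2f = W2.copy()
    W2f[:, unit] = 0.0
    return W1, W2f

# Small illustrative net: 3 inputs, 4 hidden units, 1 output (arbitrary sizes).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=3)

y_ref = forward(W1, W2, x)

# Output error caused by a single stuck-at-0 fault, without and with
# triplication (k = 3) of the hidden layer.
err_plain = abs(forward(*stuck_at_zero(W1, W2, 0), x) - y_ref)[0]
W1r, W2r = replicate(W1, W2, 3)
err_repl = abs(forward(*stuck_at_zero(W1r, W2r, 0), x) - y_ref)[0]
print(err_plain, err_repl)  # the replicated net's single-fault error is smaller
```

With k replicas, a single faulty replica removes only 1/k of its unit's contribution, so the single-fault output error shrinks by a factor of k, which is the graceful degradation the abstract refers to; complete tolerance of every single fault still requires the voting-style redundancy discussed above.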